Merge branch 'topic/seq-filter-cleanup' into for-next

Pull ALSA sequencer cleanup.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Author: Takashi Iwai <tiwai@suse.de>
Date:   2024-08-19 10:48:39 +02:00
commit 41776e4008
430 changed files with 4910 additions and 2855 deletions

@@ -562,7 +562,8 @@ Description:	Control Symmetric Multi Threading (SMT)
 			 ================ =========================================
 
 			 If control status is "forceoff" or "notsupported" writes
-			 are rejected.
+			 are rejected. Note that enabling SMT on PowerPC skips
+			 offline cores.
 
 What:		/sys/devices/system/cpu/cpuX/power/energy_perf_bias
 Date:		March 2019

@@ -162,13 +162,14 @@ iv_large_sectors
 
 Module parameters::
-   max_read_size
-   max_write_size
-      Maximum size of read or write requests. When a request larger than this size
-      is received, dm-crypt will split the request. The splitting improves
-      concurrency (the split requests could be encrypted in parallel by multiple
-      cores), but it also causes overhead. The user should tune these parameters to
-      fit the actual workload.
+
+   max_read_size
+   max_write_size
+      Maximum size of read or write requests. When a request larger than this size
+      is received, dm-crypt will split the request. The splitting improves
+      concurrency (the split requests could be encrypted in parallel by multiple
+      cores), but it also causes overhead. The user should tune these parameters to
+      fit the actual workload.
 
 Example scripts

@@ -239,25 +239,33 @@ The following keys are defined:
   ratified in commit 98918c844281 ("Merge pull request #1217 from
   riscv/zawrs") of riscv-isa-manual.
 
-* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
-  information about the selected set of processors.
+* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: Deprecated. Returns similar values to
+  :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`, but the key was
+  mistakenly classified as a bitmask rather than a value.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNKNOWN`: The performance of misaligned
-    accesses is unknown.
+* :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`: An enum value describing
+  the performance of misaligned scalar native word accesses on the selected set
+  of processors.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_EMULATED`: Misaligned accesses are
-    emulated via software, either in or below the kernel.  These accesses are
-    always extremely slow.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN`: The performance of
+    misaligned scalar accesses is unknown.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower
-    than equivalent byte accesses.  Misaligned accesses may be supported
-    directly in hardware, or trapped and emulated by software.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED`: Misaligned scalar
+    accesses are emulated via software, either in or below the kernel.  These
+    accesses are always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster
-    than equivalent byte accesses.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW`: Misaligned scalar native
+    word sized accesses are slower than the equivalent quantity of byte
+    accesses. Misaligned accesses may be supported directly in hardware, or
+    trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
-    not supported at all and will generate a misaligned address fault.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_FAST`: Misaligned scalar native
+    word sized accesses are faster than the equivalent quantity of byte
+    accesses.
+
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED`: Misaligned scalar
+    accesses are not supported at all and will generate a misaligned address
+    fault.
 
 * :c:macro:`RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE`: An unsigned int which
   represents the size of the Zicboz block in bytes.
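As a usage illustration (not part of the patch): a minimal userspace sketch that queries the new key through the riscv_hwprobe(2) syscall, using only the uapi names added above. Passing cpusetsize == 0 and cpus == NULL asks about all online CPUs (hedged: that convention is the documented hwprobe behaviour, verify against your kernel's headers)::

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>   /* struct riscv_hwprobe, RISCV_HWPROBE_* */
    #include <asm/unistd.h>    /* __NR_riscv_hwprobe */

    int main(void)
    {
            struct riscv_hwprobe pair = {
                    .key = RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF,
            };

            /* One key/value pair, all online CPUs, no flags. */
            if (syscall(__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0))
                    return 1;

            if (pair.value == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST)
                    printf("misaligned scalar accesses are fast\n");
            return 0;
    }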

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm display clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Global Clock & Reset Controller on MSM8994
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Global Clock & Reset Controller on SM6125
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Global Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Graphics Clock & Reset Controller on SM6115
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Graphics Clock & Reset Controller on SM6125
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks and power domains on

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Camera Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm camera clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Display Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm display clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Global Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Graphics Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm SM8350 Video Clock & Reset Controller
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm video clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Graphics Clock & Reset Controller on SM8450
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides the clocks, resets and power

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm SM6375 Display MDSS
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   SM6375 MSM Mobile Display Subsystem (MDSS), which encapsulates sub-blocks

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: ASUS Z00T TM5P5 NT35596 5.5" 1080×1920 LCD Panel
 
 maintainers:
-  - Konrad Dybcio <konradybcio@gmail.com>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |+
   This panel seems to only be found in the Asus Z00T

@@ -18,12 +18,12 @@ properties:
           # Samsung 13.3" FHD (1920x1080 pixels) eDP AMOLED panel
           - const: samsung,atna33xc20
       - items:
-          - enum:
-              # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel
-              - samsung,atna45af01
-              # Samsung 14.5" 3K (2944x1840 pixels) eDP AMOLED panel
-              - samsung,atna45dc02
-          - const: samsung,atna33xc20
+        - enum:
+            # Samsung 14.5" WQXGA+ (2880x1800 pixels) eDP AMOLED panel
+            - samsung,atna45af01
+            # Samsung 14.5" 3K (2944x1840 pixels) eDP AMOLED panel
+            - samsung,atna45dc02
+        - const: samsung,atna33xc20
 
   enable-gpios: true
   port: true

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Sony TD4353 JDI 5 / 5.7" 2160x1080 MIPI-DSI Panel
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   The Sony TD4353 JDI is a 5 (XZ2c) / 5.7 (XZ2) inch 2160x1080

@@ -28,6 +28,7 @@ properties:
               - anvo,anv32e61w
               - atmel,at25256B
               - fujitsu,mb85rs1mt
+              - fujitsu,mb85rs256
               - fujitsu,mb85rs64
               - microchip,at25160bn
               - microchip,25lc040

@@ -8,7 +8,7 @@ title: Qualcomm RPMh Network-On-Chip Interconnect on SC7280
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -8,7 +8,7 @@ title: Qualcomm RPMh Network-On-Chip Interconnect on SC8280XP
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -8,7 +8,7 @@ title: Qualcomm RPMh Network-On-Chip Interconnect on SM8450
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies legacy IOMMU implementations
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm "B" family devices which are not compatible with arm-smmu have

@@ -38,6 +38,10 @@ properties:
     managed: true
 
+  phys:
+    description: A reference to the SerDes lane(s)
+    maxItems: 1
+
 required:
   - reg

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies, Inc. MDM9607 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm MDM9607 SoC.

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies, Inc. SM6350 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm SM6350 SoC.

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies, Inc. SM6375 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm SM6375 SoC.

@@ -8,7 +8,7 @@ title: Qualcomm Resource Power Manager (RPM) Processor/Subsystem
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
   - Stephan Gerhold <stephan@gerhold.net>
 
 description: |

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
 title: Qualcomm Technologies, Inc. (QTI) RPM Master Stats
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   The Qualcomm RPM (Resource Power Manager) architecture includes a concept

@@ -318,10 +318,10 @@ where the columns are:
 Debugging
 =========
 
-If CONFIG_FSCACHE_DEBUG is enabled, the FS-Cache facility can have runtime
-debugging enabled by adjusting the value in::
+If CONFIG_NETFS_DEBUG is enabled, the FS-Cache facility and NETFS support can
+have runtime debugging enabled by adjusting the value in::
 
-	/sys/module/fscache/parameters/debug
+	/sys/module/netfs/parameters/debug
 
 This is a bitmask of debugging streams to enable:
@@ -343,6 +343,6 @@ This is a bitmask of debugging streams to enable:
 The appropriate set of values should be OR'd together and the result written to
 the control file. For example::
 
-	echo $((1|8|512)) >/sys/module/fscache/parameters/debug
+	echo $((1|8|512)) >/sys/module/netfs/parameters/debug
 
 will turn on all function entry debugging.

@@ -2592,7 +2592,7 @@ Specifically:
   0x6030 0000 0010 004a SPSR_ABT    64  spsr[KVM_SPSR_ABT]
   0x6030 0000 0010 004c SPSR_UND    64  spsr[KVM_SPSR_UND]
   0x6030 0000 0010 004e SPSR_IRQ    64  spsr[KVM_SPSR_IRQ]
-  0x6060 0000 0010 0050 SPSR_FIQ    64  spsr[KVM_SPSR_FIQ]
+  0x6030 0000 0010 0050 SPSR_FIQ    64  spsr[KVM_SPSR_FIQ]
   0x6040 0000 0010 0054 V0         128  fp_regs.vregs[0]    [1]_
   0x6040 0000 0010 0058 V1         128  fp_regs.vregs[1]    [1]_
   ...

@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 11
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Baby Opossum Posse
 
 # *DOCUMENTATION*
@@ -1963,7 +1963,7 @@ tags TAGS cscope gtags: FORCE
 # Protocol).
 PHONY += rust-analyzer
 rust-analyzer:
-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh
+	+$(Q)$(CONFIG_SHELL) $(srctree)/scripts/rust_is_available.sh
 	$(Q)$(MAKE) $(build)=rust $@
 
 # Script to generate missing namespace dependencies

@@ -1109,7 +1109,7 @@ void ecard_remove_driver(struct ecard_driver *drv)
 	driver_unregister(&drv->drv);
 }
 
-static int ecard_match(struct device *_dev, struct device_driver *_drv)
+static int ecard_match(struct device *_dev, const struct device_driver *_drv)
 {
 	struct expansion_card *ec = ECARD_DEV(_dev);
 	struct ecard_driver *drv = ECARD_DRV(_drv);

@@ -104,7 +104,7 @@ alternative_else_nop_endif
 
 #define __ptrauth_save_key(ctxt, key)                                  \
 	do {                                                           \
 		u64 __val;                                             \
 		__val = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);       \
 		ctxt_sys_reg(ctxt, key ## KEYLO_EL1) = __val;          \
 		__val = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);       \

@@ -188,7 +188,7 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 #define __get_mem_asm(load, reg, x, addr, label, type)			\
 	asm_goto_output(						\
 	"1:	" load "	" reg "0, [%1]\n"			\
-	_ASM_EXTABLE_##type##ACCESS_ERR(1b, %l2, %w0)			\
+	_ASM_EXTABLE_##type##ACCESS(1b, %l2)				\
 	: "=r" (x)							\
 	: "r" (addr) : : label)
 #else

@@ -27,7 +27,7 @@
 
 #include <asm/numa.h>
 
-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
+static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
 
 int __init acpi_numa_get_nid(unsigned int cpu)
 {
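The one-liner matters because `{ NUMA_NO_NODE }` initializes only element 0; C zero-fills the rest, and node 0 is a valid node id. An illustrative, standalone sketch of the GNU range designator the fix relies on (array names here are hypothetical)::

    #define N 4
    #define NUMA_NO_NODE (-1)

    /* Only slot 0 gets NUMA_NO_NODE; slots 1..3 become 0 (a real node). */
    static int map_bad[N]  = { NUMA_NO_NODE };

    /* GCC/Clang range designator: every slot gets NUMA_NO_NODE. */
    static int map_good[N] = { [0 ... N - 1] = NUMA_NO_NODE };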

@@ -355,9 +355,6 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
-	/* Init percpu seeds for random tags after cpus are set up. */
-	kasan_init_sw_tags();
-
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation

@@ -467,6 +467,8 @@ void __init smp_prepare_boot_cpu(void)
 	init_gic_priority_masking();
 
 	kasan_init_hw_tags();
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_sw_tags();
 }
 
 /*

@@ -19,6 +19,7 @@ if VIRTUALIZATION
 menuconfig KVM
 	bool "Kernel-based Virtual Machine (KVM) support"
+	depends on AS_HAS_ARMV8_4
 	select KVM_COMMON
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_MMU_NOTIFIER

@@ -10,6 +10,9 @@ include $(srctree)/virt/kvm/Makefile.kvm
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(CONFIG_KVM) += hyp/
 
+CFLAGS_sys_regs.o += -Wno-override-init
+CFLAGS_handle_exit.o += -Wno-override-init
+
 kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
	 inject_fault.o va_layout.o handle_exit.o \
	 guest.o debug.o reset.o sys_regs.o stacktrace.o \

@@ -164,6 +164,7 @@ static int kvm_arm_default_max_vcpus(void)
 /**
  * kvm_arch_init_vm - initializes a VM data structure
  * @kvm: pointer to the KVM struct
+ * @type: kvm device type
  */
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
@@ -521,10 +522,10 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
 
 static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_has_ptrauth(vcpu)) {
+	if (vcpu_has_ptrauth(vcpu) && !is_protected_kvm_enabled()) {
 		/*
-		 * Either we're running running an L2 guest, and the API/APK
-		 * bits come from L1's HCR_EL2, or API/APK are both set.
+		 * Either we're running an L2 guest, and the API/APK bits come
+		 * from L1's HCR_EL2, or API/APK are both set.
 		 */
 		if (unlikely(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) {
 			u64 val;
@@ -541,16 +542,10 @@ static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
 		 * Save the host keys if there is any chance for the guest
 		 * to use pauth, as the entry code will reload the guest
 		 * keys in that case.
-		 * Protected mode is the exception to that rule, as the
-		 * entry into the EL2 code eagerly switch back and forth
-		 * between host and hyp keys (and kvm_hyp_ctxt is out of
-		 * reach anyway).
 		 */
-		if (is_protected_kvm_enabled())
-			return;
-
 		if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) {
 			struct kvm_cpu_context *ctxt;
+
 			ctxt = this_cpu_ptr_hyp_sym(kvm_hyp_ctxt);
 			ptrauth_save_keys(ctxt);
 		}

@@ -27,7 +27,6 @@
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_nested.h>
-#include <asm/kvm_ptrauth.h>
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
 #include <asm/processor.h>

@@ -20,6 +20,8 @@ HOST_EXTRACFLAGS += -I$(objtree)/include
 lib-objs := clear_page.o copy_page.o memcpy.o memset.o
 lib-objs := $(addprefix ../../../lib/, $(lib-objs))
 
+CFLAGS_switch.nvhe.o += -Wno-override-init
+
 hyp-obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o hyp-init.o host.o \
	 hyp-main.o hyp-smp.o psci-relay.o early_alloc.o page_alloc.o \
	 cache.o setup.o mm.o mem_protect.o sys_regs.o pkvm.o stacktrace.o ffa.o

@@ -173,9 +173,8 @@ static void __pmu_switch_to_host(struct kvm_vcpu *vcpu)
 static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	/*
-	 * Make sure we handle the exit for workarounds and ptrauth
-	 * before the pKVM handling, as the latter could decide to
-	 * UNDEF.
+	 * Make sure we handle the exit for workarounds before the pKVM
+	 * handling, as the latter could decide to UNDEF.
 	 */
 	return (kvm_hyp_handle_sysreg(vcpu, exit_code) ||
		kvm_handle_pvm_sysreg(vcpu, exit_code));

@@ -6,6 +6,8 @@
 asflags-y := -D__KVM_VHE_HYPERVISOR__
 ccflags-y := -D__KVM_VHE_HYPERVISOR__
 
+CFLAGS_switch.o += -Wno-override-init
+
 obj-y := timer-sr.o sysreg-sr.o debug-sr.o switch.o tlb.o
 obj-y += ../vgic-v3-sr.o ../aarch32.o ../vgic-v2-cpuif-proxy.o ../entry.o \
	 ../fpsimd.o ../hyp-entry.o ../exception.o

@@ -786,7 +786,7 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm)
 		if (!WARN_ON(atomic_read(&mmu->refcnt)))
 			kvm_free_stage2_pgd(mmu);
 	}
-	kfree(kvm->arch.nested_mmus);
+	kvfree(kvm->arch.nested_mmus);
 	kvm->arch.nested_mmus = NULL;
 	kvm->arch.nested_mmus_size = 0;
 	kvm_uninit_stage2_mmu(kvm);

@@ -45,7 +45,8 @@ static void iter_next(struct kvm *kvm, struct vgic_state_iter *iter)
	 * Let the xarray drive the iterator after the last SPI, as the iterator
	 * has exhausted the sequentially-allocated INTID space.
	 */
-	if (iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS - 1)) {
+	if (iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS - 1) &&
+	    iter->nr_lpis) {
		if (iter->lpi_idx < iter->nr_lpis)
			xa_find_after(&dist->lpi_xa, &iter->intid,
				      VGIC_LPI_MAX_INTID,
@@ -112,7 +113,7 @@ static bool end_of_vgic(struct vgic_state_iter *iter)
	return iter->dist_id > 0 &&
		iter->vcpu_id == iter->nr_cpus &&
		iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS) &&
-		iter->lpi_idx > iter->nr_lpis;
+		(!iter->nr_lpis || iter->lpi_idx > iter->nr_lpis);
 }
 
 static void *vgic_debug_start(struct seq_file *s, loff_t *pos)

@@ -438,14 +438,13 @@ void kvm_vgic_destroy(struct kvm *kvm)
 	unsigned long i;
 
 	mutex_lock(&kvm->slots_lock);
+	mutex_lock(&kvm->arch.config_lock);
 
 	vgic_debug_destroy(kvm);
 
 	kvm_for_each_vcpu(i, vcpu, kvm)
 		__kvm_vgic_vcpu_destroy(vcpu);
 
-	mutex_lock(&kvm->arch.config_lock);
-
 	kvm_vgic_dist_destroy(kvm);
 
 	mutex_unlock(&kvm->arch.config_lock);

@@ -9,7 +9,7 @@
 #include <kvm/arm_vgic.h>
 #include "vgic.h"
 
-/**
+/*
  * vgic_irqfd_set_irq: inject the IRQ corresponding to the
  * irqchip routing entry
  *
@@ -75,7 +75,8 @@ static void kvm_populate_msi(struct kvm_kernel_irq_routing_entry *e,
 	msi->flags = e->msi.flags;
 	msi->devid = e->msi.devid;
 }
-/**
+
+/*
  * kvm_set_msi: inject the MSI corresponding to the
  * MSI routing entry
  *
@@ -98,7 +99,7 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
 	return vgic_its_inject_msi(kvm, &msi);
 }
 
-/**
+/*
  * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
  */
 int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,

@@ -2040,6 +2040,7 @@ typedef int (*entry_fn_t)(struct vgic_its *its, u32 id, void *entry,
  * @start_id: the ID of the first entry in the table
  *            (non zero for 2d level tables)
  * @fn: function to apply on each entry
+ * @opaque: pointer to opaque data
  *
  * Return: < 0 on error, 0 if last element was identified, 1 otherwise
  * (the last element may not be found on second level tables)
@@ -2079,7 +2080,7 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_ite - Save an interrupt translation entry at @gpa
  */
 static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
@@ -2099,6 +2100,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
 
 /**
  * vgic_its_restore_ite - restore an interrupt translation entry
+ *
+ * @its: its handle
  * @event_id: id used for indexing
  * @ptr: pointer to the ITE entry
  * @opaque: pointer to the its_device
@@ -2231,6 +2234,7 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev)
  * @its: ITS handle
  * @dev: ITS device
  * @ptr: GPA
+ * @dte_esz: device table entry size
  */
 static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
 			     gpa_t ptr, int dte_esz)
@@ -2313,7 +2317,7 @@ static int vgic_its_device_cmp(void *priv, const struct list_head *a,
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_device_tables - Save the device table and all ITT
  * into guest RAM
  *
@@ -2386,7 +2390,7 @@ static int handle_l1_dte(struct vgic_its *its, u32 id, void *addr,
 	return ret;
 }
 
-/**
+/*
  * vgic_its_restore_device_tables - Restore the device table and all ITT
  * from guest RAM to internal data structs
  */
@@ -2478,7 +2482,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_collection_table - Save the collection table into
 * guest RAM
  */
@@ -2518,7 +2522,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
 	return ret;
 }
 
-/**
+/*
 * vgic_its_restore_collection_table - reads the collection table
 * in guest memory and restores the ITS internal state. Requires the
 * BASER registers to be restored before.
@@ -2556,7 +2560,7 @@ static int vgic_its_restore_collection_table(struct vgic_its *its)
 	return ret;
 }
 
-/**
+/*
 * vgic_its_save_tables_v0 - Save the ITS tables into guest ARM
 * according to v0 ABI
 */
@@ -2571,7 +2575,7 @@ static int vgic_its_save_tables_v0(struct vgic_its *its)
 	return vgic_its_save_collection_table(its);
 }
 
-/**
+/*
 * vgic_its_restore_tables_v0 - Restore the ITS tables from guest RAM
 * to internal data structs according to V0 ABI
 *

@@ -370,7 +370,7 @@ static void map_all_vpes(struct kvm *kvm)
 			       dist->its_vm.vpes[i]->irq));
 }
 
-/**
+/*
 * vgic_v3_save_pending_tables - Save the pending tables into guest RAM
 * kvm lock and all vcpu lock must be held
 */

@@ -313,7 +313,7 @@ static bool vgic_validate_injection(struct vgic_irq *irq, bool level, void *owne
 * with all locks dropped.
 */
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
-			   unsigned long flags)
+			   unsigned long flags) __releases(&irq->irq_lock)
 {
	struct kvm_vcpu *vcpu;

@@ -186,7 +186,7 @@ bool vgic_get_phys_line_level(struct vgic_irq *irq);
 void vgic_irq_set_phys_pending(struct vgic_irq *irq, bool pending);
 void vgic_irq_set_phys_active(struct vgic_irq *irq, bool active);
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
-			   unsigned long flags);
+			   unsigned long flags) __releases(&irq->irq_lock);
 void vgic_kick_vcpus(struct kvm *kvm);
 void vgic_irq_handle_resampling(struct vgic_irq *irq,
				bool lr_deactivated, bool lr_pending);
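`__releases()` (like `__acquires()`) only affects sparse; in a regular compile it expands to nothing. A simplified sketch of the pattern, not the kernel's exact definitions, with a hypothetical function name::

    #ifdef __CHECKER__
    # define __releases(x) __attribute__((context(x, 1, 0)))
    #else
    # define __releases(x)
    #endif

    /*
     * The caller must enter with the lock held; this function drops it.
     * sparse flags call sites whose lock context does not match.
     */
    void finish_and_unlock(spinlock_t *lock) __releases(lock)
    {
            spin_unlock(lock);
    }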

@@ -111,7 +111,7 @@ void gio_device_unregister(struct gio_device *giodev)
 }
 EXPORT_SYMBOL_GPL(gio_device_unregister);
 
-static int gio_bus_match(struct device *dev, struct device_driver *drv)
+static int gio_bus_match(struct device *dev, const struct device_driver *drv)
 {
 	struct gio_device *gio_dev = to_gio_device(dev);
 	struct gio_driver *gio_drv = to_gio_driver(drv);

@@ -145,6 +145,7 @@ static inline int cpu_to_coregroup_id(int cpu)
 
 #ifdef CONFIG_HOTPLUG_SMT
 #include <linux/cpu_smt.h>
+#include <linux/cpumask.h>
 #include <asm/cputhreads.h>
 
 static inline bool topology_is_primary_thread(unsigned int cpu)
@@ -156,6 +157,18 @@ static inline bool topology_smt_thread_allowed(unsigned int cpu)
 {
 	return cpu_thread_in_core(cpu) < cpu_smt_num_threads;
 }
+
+#define topology_is_core_online topology_is_core_online
+static inline bool topology_is_core_online(unsigned int cpu)
+{
+	int i, first_cpu = cpu_first_thread_sibling(cpu);
+
+	for (i = first_cpu; i < first_cpu + threads_per_core; ++i) {
+		if (cpu_online(i))
+			return true;
+	}
+	return false;
+}
 #endif
 
 #endif /* __KERNEL__ */

@@ -959,6 +959,7 @@ void __init setup_arch(char **cmdline_p)
 	mem_topology_setup();
 	/* Set max_mapnr before paging_init() */
 	set_max_mapnr(max_pfn);
+	high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
 
 	/*
 	 * Release secondary cpus out of their spinloops at 0x60 now that

@@ -73,7 +73,7 @@ void setup_kup(void)
 
 #define CTOR(shift) static void ctor_##shift(void *addr) \
 {							\
-	memset(addr, 0, sizeof(void *) << (shift));	\
+	memset(addr, 0, sizeof(pgd_t) << (shift));	\
 }
 
 CTOR(0); CTOR(1); CTOR(2); CTOR(3); CTOR(4); CTOR(5); CTOR(6); CTOR(7);
@@ -117,7 +117,7 @@ EXPORT_SYMBOL_GPL(pgtable_cache); /* used by kvm_hv module */
 void pgtable_cache_add(unsigned int shift)
 {
 	char *name;
-	unsigned long table_size = sizeof(void *) << shift;
+	unsigned long table_size = sizeof(pgd_t) << shift;
 	unsigned long align = table_size;
 
 	/* When batching pgtable pointers for RCU freeing, we store

@@ -290,8 +290,6 @@ void __init mem_init(void)
 	swiotlb_init(ppc_swiotlb_enable, ppc_swiotlb_flags);
 #endif
 
-	high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
-
 	kasan_late_init();
 
 	memblock_free_all();

@@ -8,7 +8,7 @@
 
 #include <uapi/asm/hwprobe.h>
 
-#define RISCV_HWPROBE_MAX_KEY 8
+#define RISCV_HWPROBE_MAX_KEY 9
 
 static inline bool riscv_hwprobe_key_is_valid(__s64 key)
 {

@@ -82,6 +82,12 @@ struct riscv_hwprobe {
 #define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE	6
 #define RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS	7
 #define RISCV_HWPROBE_KEY_TIME_CSR_FREQ	8
+#define RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF	9
+#define		RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN		0
+#define		RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED	1
+#define		RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW		2
+#define		RISCV_HWPROBE_MISALIGNED_SCALAR_FAST		3
+#define		RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED	4
 /* Increase RISCV_HWPROBE_MAX_KEY when adding items. */
 
 /* Flags */

@@ -28,7 +28,7 @@
 
 #include <asm/numa.h>
 
-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
+static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
 
 int __init acpi_numa_get_nid(unsigned int cpu)
 {

@@ -205,6 +205,8 @@ int patch_text_set_nosync(void *addr, u8 c, size_t len)
 	int ret;
 
 	ret = patch_insn_set(addr, c, len);
+	if (!ret)
+		flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len);
 
 	return ret;
 }
@@ -239,6 +241,8 @@ int patch_text_nosync(void *addr, const void *insns, size_t len)
 	int ret;
 
 	ret = patch_insn_write(addr, insns, len);
+	if (!ret)
+		flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len);
 
 	return ret;
 }

@@ -178,13 +178,13 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
 			perf = this_perf;
 
 		if (perf != this_perf) {
-			perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+			perf = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 			break;
 		}
 	}
 
 	if (perf == -1ULL)
-		return RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	return perf;
 }
@@ -192,12 +192,12 @@ static u64 hwprobe_misaligned(const struct cpumask *cpus)
 static u64 hwprobe_misaligned(const struct cpumask *cpus)
 {
 	if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS))
-		return RISCV_HWPROBE_MISALIGNED_FAST;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available())
-		return RISCV_HWPROBE_MISALIGNED_EMULATED;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 
-	return RISCV_HWPROBE_MISALIGNED_SLOW;
+	return RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 }
 #endif
 
@@ -225,6 +225,7 @@ static void hwprobe_one_pair(struct riscv_hwprobe *pair,
 		break;
 
 	case RISCV_HWPROBE_KEY_CPUPERF_0:
+	case RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF:
 		pair->value = hwprobe_misaligned(cpus);
 		break;

@@ -319,6 +319,7 @@ void do_trap_ecall_u(struct pt_regs *regs)
 
 		regs->epc += 4;
 		regs->orig_a0 = regs->a0;
+		regs->a0 = -ENOSYS;
 
 		riscv_v_vstate_discard(regs);
 
@@ -328,8 +329,7 @@ void do_trap_ecall_u(struct pt_regs *regs)
 
 		if (syscall >= 0 && syscall < NR_syscalls)
 			syscall_handler(regs, syscall);
-		else if (syscall != -1)
-			regs->a0 = -ENOSYS;
+
 		/*
 		 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
 		 * so the maximum stack offset is 1k bytes (10 bits).

@@ -338,7 +338,7 @@ int handle_misaligned_load(struct pt_regs *regs)
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
 #ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
-	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
+	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 #endif
 
 	if (!unaligned_enabled)
@@ -532,13 +532,13 @@ static bool check_unaligned_access_emulated(int cpu)
 	unsigned long tmp_var, tmp_val;
 	bool misaligned_emu_detected;
 
-	*mas_ptr = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+	*mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	__asm__ __volatile__ (
 		"       "REG_L" %[tmp], 1(%[ptr])\n"
 		: [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
 
-	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_EMULATED);
+	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
 	/*
 	 * If unaligned_ctl is already set, this means that we detected that all
 	 * CPUS uses emulated misaligned access at boot time. If that changed

@@ -34,9 +34,9 @@ static int check_unaligned_access(void *param)
 	struct page *page = param;
 	void *dst;
 	void *src;
-	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+	long speed = RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
 		return 0;
 
 	/* Make an unaligned destination buffer. */
@@ -95,14 +95,14 @@ static int check_unaligned_access(void *param)
 	}
 
 	if (word_cycles < byte_cycles)
-		speed = RISCV_HWPROBE_MISALIGNED_FAST;
+		speed = RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	ratio = div_u64((byte_cycles * 100), word_cycles);
 	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
 		cpu,
 		ratio / 100,
 		ratio % 100,
-		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+		(speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST) ? "fast" : "slow");
 
 	per_cpu(misaligned_access_speed, cpu) = speed;
 
@@ -110,7 +110,7 @@ static int check_unaligned_access(void *param)
	 * Set the value of fast_misaligned_access of a CPU. These operations
	 * are atomic to avoid race conditions.
	 */
-	if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
+	if (speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST)
		cpumask_set_cpu(cpu, &fast_misaligned_access);
	else
		cpumask_clear_cpu(cpu, &fast_misaligned_access);
@@ -188,7 +188,7 @@ static int riscv_online_cpu(unsigned int cpu)
	static struct page *buf;
 
	/* We are already set since the last check */
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
		goto exit;
 
	buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);

@@ -38,7 +38,7 @@ bool __riscv_isa_vendor_extension_available(int cpu, unsigned long vendor, unsig
 #ifdef CONFIG_RISCV_ISA_VENDOR_EXT_ANDES
 	case ANDES_VENDOR_ID:
 		bmap = &riscv_isa_vendor_ext_list_andes.all_harts_isa_bitmap;
-		cpu_bmap = &riscv_isa_vendor_ext_list_andes.per_hart_isa_bitmap[cpu];
+		cpu_bmap = riscv_isa_vendor_ext_list_andes.per_hart_isa_bitmap;
 		break;
 #endif
 	default:

@@ -927,7 +927,7 @@ static void __init create_kernel_page_table(pgd_t *pgdir,
 				   PMD_SIZE, PAGE_KERNEL_EXEC);
 
 	/* Map the data in RAM */
-	end_va = kernel_map.virt_addr + XIP_OFFSET + kernel_map.size;
+	end_va = kernel_map.virt_addr + kernel_map.size;
 	for (va = kernel_map.virt_addr + XIP_OFFSET; va < end_va; va += PMD_SIZE)
 		create_pgd_mapping(pgdir, va,
 				   kernel_map.phys_addr + (va - (kernel_map.virt_addr + XIP_OFFSET)),
@@ -1096,7 +1096,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
 
 	phys_ram_base = CONFIG_PHYS_RAM_BASE;
 	kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
-	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_sdata);
+	kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);
 	kernel_map.va_kernel_xip_pa_offset = kernel_map.virt_addr - kernel_map.xiprom;
 #else

@@ -441,7 +441,10 @@ static inline int share(unsigned long addr, u16 cmd)
 
 	if (!uv_call(0, (u64)&uvcb))
 		return 0;
-	return -EINVAL;
+	pr_err("%s UVC failed (rc: 0x%x, rrc: 0x%x), possible hypervisor bug.\n",
+	       uvcb.header.cmd == UVC_CMD_SET_SHARED_ACCESS ? "Share" : "Unshare",
+	       uvcb.header.rc, uvcb.header.rrc);
+	panic("System security cannot be guaranteed unless the system panics now.\n");
 }
 
 /*

@@ -267,7 +267,12 @@ static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots)
 
 static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm)
 {
-	u32 gd = virt_to_phys(kvm->arch.gisa_int.origin);
+	u32 gd;
+
+	if (!kvm->arch.gisa_int.origin)
+		return 0;
+
+	gd = virt_to_phys(kvm->arch.gisa_int.origin);
 
 	if (gd && sclp.has_gisaf)
 		gd |= GISA_FORMAT1;

@@ -2192,6 +2192,8 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 #define kvm_arch_has_private_mem(kvm) false
 #endif
 
+#define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
+
 static inline u16 kvm_read_ldt(void)
 {
 	u16 ldt;

@@ -286,7 +286,6 @@ static inline int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	return HV_STATUS_ACCESS_DENIED;
 }
 static inline void kvm_hv_vcpu_purge_flush_tlb(struct kvm_vcpu *vcpu) {}
-static inline void kvm_hv_free_pa_page(struct kvm *kvm) {}
 static inline bool kvm_hv_synic_has_vector(struct kvm_vcpu *vcpu, int vector)
 {
 	return false;

@@ -351,10 +351,8 @@ static void kvm_recalculate_logical_map(struct kvm_apic_map *new,
	 * reversing the LDR calculation to get cluster of APICs, i.e. no
	 * additional work is required.
	 */
-	if (apic_x2apic_mode(apic)) {
-		WARN_ON_ONCE(ldr != kvm_apic_calc_x2apic_ldr(kvm_x2apic_id(apic)));
+	if (apic_x2apic_mode(apic))
 		return;
-	}
 
	if (WARN_ON_ONCE(!kvm_apic_map_get_logical_dest(new, ldr,
							&cluster, &mask))) {
@@ -2966,18 +2964,28 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
				struct kvm_lapic_state *s, bool set)
 {
	if (apic_x2apic_mode(vcpu->arch.apic)) {
+		u32 x2apic_id = kvm_x2apic_id(vcpu->arch.apic);
		u32 *id = (u32 *)(s->regs + APIC_ID);
		u32 *ldr = (u32 *)(s->regs + APIC_LDR);
		u64 icr;
 
		if (vcpu->kvm->arch.x2apic_format) {
-			if (*id != vcpu->vcpu_id)
+			if (*id != x2apic_id)
				return -EINVAL;
		} else {
+			/*
+			 * Ignore the userspace value when setting APIC state.
+			 * KVM's model is that the x2APIC ID is readonly, e.g.
+			 * KVM only supports delivering interrupts to KVM's
+			 * version of the x2APIC ID.  However, for backwards
+			 * compatibility, don't reject attempts to set a
+			 * mismatched ID for userspace that hasn't opted into
+			 * x2apic_format.
+			 */
			if (set)
-				*id >>= 24;
+				*id = x2apic_id;
			else
-				*id <<= 24;
+				*id = x2apic_id << 24;
		}
 
		/*
@@ -2986,7 +2994,7 @@ static int kvm_apic_state_fixup(struct kvm_vcpu *vcpu,
		 * split to ICR+ICR2 in userspace for backwards compatibility.
		 */
		if (set) {
-			*ldr = kvm_apic_calc_x2apic_ldr(*id);
+			*ldr = kvm_apic_calc_x2apic_ldr(x2apic_id);
 
			icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
			      (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;

@@ -2276,7 +2276,7 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
 	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
 		struct sev_data_snp_launch_update fw_args = {0};
-		bool assigned;
+		bool assigned = false;
 		int level;
 
 		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);
@@ -2290,9 +2290,10 @@ static int sev_gmem_post_populate(struct kvm *kvm, gfn_t gfn_start, kvm_pfn_t pf
 		if (src) {
 			void *vaddr = kmap_local_pfn(pfn + i);
 
-			ret = copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE);
-			if (ret)
+			if (copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE)) {
+				ret = -EFAULT;
 				goto err;
+			}
 
 			kunmap_local(vaddr);
 		}
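The underlying pitfall: copy_from_user() returns the number of bytes it failed to copy, never a negative errno, so the old code could propagate a positive byte count as a return value. A minimal sketch of the corrected idiom (hypothetical helper name)::

    /* Returns 0 on success or -EFAULT, never a raw byte count. */
    static int fetch_from_user(void *dst, const void __user *src, size_t len)
    {
            if (copy_from_user(dst, src, len))
                    return -EFAULT;
            return 0;
    }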

@@ -427,8 +427,7 @@ static void kvm_user_return_msr_cpu_online(void)
 
 int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 {
-	unsigned int cpu = smp_processor_id();
-	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
 	int err;
 
 	value = (value & mask) | (msrs->values[slot].host & ~mask);
@@ -450,8 +449,7 @@ EXPORT_SYMBOL_GPL(kvm_set_user_return_msr);
 
 static void drop_user_return_notifiers(void)
 {
-	unsigned int cpu = smp_processor_id();
-	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
 
 	if (msrs->registered)
 		kvm_on_user_return(&msrs->urn);
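this_cpu_ptr(p) is the preferred spelling of per_cpu_ptr(p, smp_processor_id()) in preemption-disabled context; it drops the separate CPU-id read. An illustrative sketch with hypothetical names::

    static DEFINE_PER_CPU(int, counter);

    static void bump(void)   /* called with preemption disabled */
    {
            /* Same pointer, without the explicit smp_processor_id() lookup. */
            int *fast = this_cpu_ptr(&counter);
            int *slow = per_cpu_ptr(&counter, smp_processor_id());

            WARN_ON(fast != slow);
            (*fast)++;
    }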

@@ -38,6 +38,7 @@ static void blk_mq_update_wake_batch(struct blk_mq_tags *tags,
 void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 {
 	unsigned int users;
+	unsigned long flags;
 	struct blk_mq_tags *tags = hctx->tags;
 
 	/*
@@ -56,11 +57,11 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
 		return;
 	}
 
-	spin_lock_irq(&tags->lock);
+	spin_lock_irqsave(&tags->lock, flags);
 	users = tags->active_queues + 1;
 	WRITE_ONCE(tags->active_queues, users);
 	blk_mq_update_wake_batch(tags, users);
-	spin_unlock_irq(&tags->lock);
+	spin_unlock_irqrestore(&tags->lock, flags);
 }
 
 /*

@@ -188,13 +188,9 @@ acpi_ev_detach_region(union acpi_operand_object *region_obj,
 			 u8 acpi_ns_is_locked);
 
 void
-acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
+acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth,
 			    acpi_adr_space_type space_id, u32 function);
 
-void
-acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *node,
-				  acpi_adr_space_type space_id);
-
 acpi_status
 acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);

@@ -20,6 +20,10 @@ extern u8 acpi_gbl_default_address_spaces[];
 
 /* Local prototypes */
 
+static void
+acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
+				  acpi_adr_space_type space_id);
+
 static acpi_status
 acpi_ev_reg_run(acpi_handle obj_handle,
 		u32 level, void *context, void **return_value);
@@ -61,6 +65,7 @@ acpi_status acpi_ev_initialize_op_regions(void)
 			     acpi_gbl_default_address_spaces
 			     [i])) {
 			acpi_ev_execute_reg_methods(acpi_gbl_root_node,
+						    ACPI_UINT32_MAX,
 						    acpi_gbl_default_address_spaces
 						    [i], ACPI_REG_CONNECT);
 		}
@@ -668,6 +673,7 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
 * FUNCTION:    acpi_ev_execute_reg_methods
 *
 * PARAMETERS:  node            - Namespace node for the device
+ *              max_depth       - Depth to which search for _REG
 *              space_id        - The address space ID
 *              function        - Passed to _REG: On (1) or Off (0)
 *
@@ -679,7 +685,7 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
 ******************************************************************************/
 
 void
-acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
+acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth,
			    acpi_adr_space_type space_id, u32 function)
 {
	struct acpi_reg_walk_info info;
@@ -713,7 +719,7 @@ acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
	 * regions and _REG methods. (i.e. handlers must be installed for all
	 * regions of this Space ID before we can run any _REG methods)
	 */
-	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
+	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, max_depth,
				     ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run, NULL,
				     &info, NULL);
@@ -814,7 +820,7 @@ acpi_ev_reg_run(acpi_handle obj_handle,
 *
 ******************************************************************************/
 
-void
+static void
 acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
				   acpi_adr_space_type space_id)
 {


@@ -85,7 +85,8 @@ acpi_install_address_space_handler_internal(acpi_handle device,
 	/* Run all _REG methods for this address space */
 
 	if (run_reg) {
-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
+		acpi_ev_execute_reg_methods(node, ACPI_UINT32_MAX, space_id,
+					    ACPI_REG_CONNECT);
 	}
 
 unlock_and_exit:
@@ -263,6 +264,7 @@ ACPI_EXPORT_SYMBOL(acpi_remove_address_space_handler)
 * FUNCTION:    acpi_execute_reg_methods
 *
 * PARAMETERS:  device          - Handle for the device
+*              max_depth       - Depth to which search for _REG
 *              space_id        - The address space ID
 *
 * RETURN:      Status
@@ -271,7 +273,8 @@ ACPI_EXPORT_SYMBOL(acpi_remove_address_space_handler)
 *
 ******************************************************************************/
 acpi_status
-acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
+acpi_execute_reg_methods(acpi_handle device, u32 max_depth,
+			 acpi_adr_space_type space_id)
 {
 	struct acpi_namespace_node *node;
 	acpi_status status;
@@ -296,7 +299,8 @@ acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
 		/* Run all _REG methods for this address space */
 
-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
+		acpi_ev_execute_reg_methods(node, max_depth, space_id,
+					    ACPI_REG_CONNECT);
 	} else {
 		status = AE_BAD_PARAMETER;
 	}
@@ -306,57 +310,3 @@ acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
 }
 
 ACPI_EXPORT_SYMBOL(acpi_execute_reg_methods)
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_execute_orphan_reg_method
- *
- * PARAMETERS:  device          - Handle for the device
- *              space_id        - The address space ID
- *
- * RETURN:      Status
- *
- * DESCRIPTION: Execute an "orphan" _REG method that appears under an ACPI
- *              device. This is a _REG method that has no corresponding region
- *              within the device's scope.
- *
- ******************************************************************************/
-acpi_status
-acpi_execute_orphan_reg_method(acpi_handle device, acpi_adr_space_type space_id)
-{
-	struct acpi_namespace_node *node;
-	acpi_status status;
-
-	ACPI_FUNCTION_TRACE(acpi_execute_orphan_reg_method);
-
-	/* Parameter validation */
-
-	if (!device) {
-		return_ACPI_STATUS(AE_BAD_PARAMETER);
-	}
-
-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
-	}
-
-	/* Convert and validate the device handle */
-
-	node = acpi_ns_validate_handle(device);
-	if (node) {
-
-		/*
-		 * If an "orphan" _REG method is present in the device's scope
-		 * for the given address space ID, run it.
-		 */
-		acpi_ev_execute_orphan_reg_method(node, space_id);
-	} else {
-		status = AE_BAD_PARAMETER;
-	}
-
-	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-	return_ACPI_STATUS(status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_execute_orphan_reg_method)


@@ -1487,12 +1487,13 @@ static bool install_gpio_irq_event_handler(struct acpi_ec *ec)
 static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device,
 			       bool call_reg)
 {
-	acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle;
 	acpi_status status;
 
 	acpi_ec_start(ec, false);
 
 	if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
+		acpi_handle scope_handle = ec == first_ec ? ACPI_ROOT_OBJECT : ec->handle;
+
 		acpi_ec_enter_noirq(ec);
 		status = acpi_install_address_space_handler_no_reg(scope_handle,
 								   ACPI_ADR_SPACE_EC,
@@ -1506,10 +1507,7 @@ static int ec_install_handlers(struct acpi_ec *ec, struct acpi_device *device,
 	}
 
 	if (call_reg && !test_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags)) {
-		acpi_execute_reg_methods(scope_handle, ACPI_ADR_SPACE_EC);
-
-		if (scope_handle != ec->handle)
-			acpi_execute_orphan_reg_method(ec->handle, ACPI_ADR_SPACE_EC);
-
+		acpi_execute_reg_methods(ec->handle, ACPI_UINT32_MAX, ACPI_ADR_SPACE_EC);
 		set_bit(EC_FLAGS_EC_REG_CALLED, &ec->flags);
 	}
@@ -1724,6 +1722,12 @@ static void acpi_ec_remove(struct acpi_device *device)
 	}
 }
 
+void acpi_ec_register_opregions(struct acpi_device *adev)
+{
+	if (first_ec && first_ec->handle != adev->handle)
+		acpi_execute_reg_methods(adev->handle, 1, ACPI_ADR_SPACE_EC);
+}
+
 static acpi_status
 ec_parse_io_ports(struct acpi_resource *resource, void *context)
 {


@@ -223,6 +223,7 @@ int acpi_ec_add_query_handler(struct acpi_ec *ec, u8 query_bit,
 			      acpi_handle handle, acpi_ec_query_func func,
 			      void *data);
 void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
+void acpi_ec_register_opregions(struct acpi_device *adev);
 
 #ifdef CONFIG_PM_SLEEP
 void acpi_ec_flush_work(void);


@@ -2273,6 +2273,8 @@ static int acpi_bus_attach(struct acpi_device *device, void *first_pass)
 	if (device->handler)
 		goto ok;
 
+	acpi_ec_register_opregions(device);
+
 	if (!device->flags.initialized) {
 		device->flags.power_manageable =
 			device->power.states[ACPI_STATE_D0].flags.valid;


@@ -951,8 +951,19 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
 				   &sense_key, &asc, &ascq);
 		ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq);
 	} else {
-		/* ATA PASS-THROUGH INFORMATION AVAILABLE */
-		ata_scsi_set_sense(qc->dev, cmd, RECOVERED_ERROR, 0, 0x1D);
+		/*
+		 * ATA PASS-THROUGH INFORMATION AVAILABLE
+		 *
+		 * Note: we are supposed to call ata_scsi_set_sense(), which
+		 * respects the D_SENSE bit, instead of unconditionally
+		 * generating the sense data in descriptor format. However,
+		 * because hdparm, hddtemp, and udisks incorrectly assume sense
+		 * data in descriptor format, without even looking at the
+		 * RESPONSE CODE field in the returned sense data (to see which
+		 * format the returned sense data is in), we are stuck with
+		 * being bug compatible with older kernels.
+		 */
+		scsi_build_sense(cmd, 1, RECOVERED_ERROR, 0, 0x1D);
 	}
 }


@@ -1118,8 +1118,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
 	rpp->len += skb->len;
 
 	if (stat & SAR_RSQE_EPDU) {
+		unsigned int len, truesize;
 		unsigned char *l1l2;
-		unsigned int len;
 
 		l1l2 = (unsigned char *) ((unsigned long) skb->data + skb->len - 6);
@@ -1189,14 +1189,15 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
 		ATM_SKB(skb)->vcc = vcc;
 		__net_timestamp(skb);
 
+		truesize = skb->truesize;
 		vcc->push(vcc, skb);
 		atomic_inc(&vcc->stats->rx);
 
-		if (skb->truesize > SAR_FB_SIZE_3)
+		if (truesize > SAR_FB_SIZE_3)
 			add_rx_skb(card, 3, SAR_FB_SIZE_3, 1);
-		else if (skb->truesize > SAR_FB_SIZE_2)
+		else if (truesize > SAR_FB_SIZE_2)
 			add_rx_skb(card, 2, SAR_FB_SIZE_2, 1);
-		else if (skb->truesize > SAR_FB_SIZE_1)
+		else if (truesize > SAR_FB_SIZE_1)
 			add_rx_skb(card, 1, SAR_FB_SIZE_1, 1);
 		else
 			add_rx_skb(card, 0, SAR_FB_SIZE_0, 1);


@@ -50,6 +50,7 @@ MODULE_LICENSE("GPL v2");
 static const char xillyname[] = "xillyusb";
 
 static unsigned int fifo_buf_order;
+static struct workqueue_struct *wakeup_wq;
 
 #define USB_VENDOR_ID_XILINX		0x03fd
 #define USB_VENDOR_ID_ALTERA		0x09fb
@@ -569,10 +570,6 @@ static void cleanup_dev(struct kref *kref)
  * errors if executed. The mechanism relies on that xdev->error is assigned
  * a non-zero value by report_io_error() prior to queueing wakeup_all(),
  * which prevents bulk_in_work() from calling process_bulk_in().
- *
- * The fact that wakeup_all() and bulk_in_work() are queued on the same
- * workqueue makes their concurrent execution very unlikely, however the
- * kernel's API doesn't seem to ensure this strictly.
  */
 
 static void wakeup_all(struct work_struct *work)
@@ -627,7 +624,7 @@ static void report_io_error(struct xillyusb_dev *xdev,
 
 	if (do_once) {
 		kref_get(&xdev->kref); /* xdev is used by work item */
-		queue_work(xdev->workq, &xdev->wakeup_workitem);
+		queue_work(wakeup_wq, &xdev->wakeup_workitem);
 	}
 }
@@ -1906,6 +1903,13 @@ static const struct file_operations xillyusb_fops = {
 static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)
 {
+	struct usb_device *udev = xdev->udev;
+
+	/* Verify that device has the two fundamental bulk in/out endpoints */
+	if (usb_pipe_type_check(udev, usb_sndbulkpipe(udev, MSG_EP_NUM)) ||
+	    usb_pipe_type_check(udev, usb_rcvbulkpipe(udev, IN_EP_NUM)))
+		return -ENODEV;
+
 	xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT,
 				      bulk_out_work, 1, 2);
 	if (!xdev->msg_ep)
@@ -1935,14 +1939,15 @@ static int setup_channels(struct xillyusb_dev *xdev,
 			  __le16 *chandesc,
 			  int num_channels)
 {
-	struct xillyusb_channel *chan;
+	struct usb_device *udev = xdev->udev;
+	struct xillyusb_channel *chan, *new_channels;
 	int i;
 
 	chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL);
 	if (!chan)
 		return -ENOMEM;
 
-	xdev->channels = chan;
+	new_channels = chan;
 
 	for (i = 0; i < num_channels; i++, chan++) {
 		unsigned int in_desc = le16_to_cpu(*chandesc++);
@@ -1971,6 +1976,15 @@
 		 */
 
 		if ((out_desc & 0x80) && i < 14) { /* Entry is valid */
+			if (usb_pipe_type_check(udev,
+						usb_sndbulkpipe(udev, i + 2))) {
+				dev_err(xdev->dev,
+					"Missing BULK OUT endpoint %d\n",
+					i + 2);
+				kfree(new_channels);
+				return -ENODEV;
+			}
+
 			chan->writable = 1;
 			chan->out_synchronous = !!(out_desc & 0x40);
 			chan->out_seekable = !!(out_desc & 0x20);
@@ -1980,6 +1994,7 @@
 		}
 	}
 
+	xdev->channels = new_channels;
 	return 0;
 }
@@ -2096,9 +2111,11 @@ static int xillyusb_discovery(struct usb_interface *interface)
 	 * just after responding with the IDT, there is no reason for any
 	 * work item to be running now. To be sure that xdev->channels
 	 * is updated on anything that might run in parallel, flush the
-	 * workqueue, which rarely does anything.
+	 * device's workqueue and the wakeup work item. This rarely
+	 * does anything.
 	 */
 	flush_workqueue(xdev->workq);
+	flush_work(&xdev->wakeup_workitem);
 
 	xdev->num_channels = num_channels;
@@ -2258,6 +2275,10 @@ static int __init xillyusb_init(void)
 {
 	int rc = 0;
 
+	wakeup_wq = alloc_workqueue(xillyname, 0, 0);
+	if (!wakeup_wq)
+		return -ENOMEM;
+
 	if (LOG2_INITIAL_FIFO_BUF_SIZE > PAGE_SHIFT)
 		fifo_buf_order = LOG2_INITIAL_FIFO_BUF_SIZE - PAGE_SHIFT;
 	else
@@ -2265,12 +2286,17 @@
 
 	rc = usb_register(&xillyusb_driver);
 
+	if (rc)
+		destroy_workqueue(wakeup_wq);
+
 	return rc;
 }
 
 static void __exit xillyusb_exit(void)
 {
 	usb_deregister(&xillyusb_driver);
+
+	destroy_workqueue(wakeup_wq);
 }
 
 module_init(xillyusb_init);


@@ -738,7 +738,7 @@ static struct ccu_div vp_axi_clk = {
 		.hw.init	= CLK_HW_INIT_PARENTS_HW("vp-axi",
 							 video_pll_clk_parent,
 							 &ccu_div_ops,
-							 0),
+							 CLK_IGNORE_UNUSED),
 	},
 };


@@ -39,6 +39,8 @@
 #define MLXBF_GPIO_CAUSE_OR_EVTEN0	0x14
 #define MLXBF_GPIO_CAUSE_OR_CLRCAUSE	0x18
 
+#define MLXBF_GPIO_CLR_ALL_INTS		GENMASK(31, 0)
+
 struct mlxbf3_gpio_context {
 	struct gpio_chip gc;
@@ -82,6 +84,8 @@ static void mlxbf3_gpio_irq_disable(struct irq_data *irqd)
 	val = readl(gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
 	val &= ~BIT(offset);
 	writel(val, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
+
+	writel(BIT(offset), gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE);
 	raw_spin_unlock_irqrestore(&gs->gc.bgpio_lock, flags);
 
 	gpiochip_disable_irq(gc, offset);
@@ -253,6 +257,15 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
 	return 0;
 }
 
+static void mlxbf3_gpio_shutdown(struct platform_device *pdev)
+{
+	struct mlxbf3_gpio_context *gs = platform_get_drvdata(pdev);
+
+	/* Disable and clear all interrupts */
+	writel(0, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_EVTEN0);
+	writel(MLXBF_GPIO_CLR_ALL_INTS, gs->gpio_cause_io + MLXBF_GPIO_CAUSE_OR_CLRCAUSE);
+}
+
 static const struct acpi_device_id mlxbf3_gpio_acpi_match[] = {
 	{ "MLNXBF33", 0 },
 	{}
@@ -265,6 +278,7 @@
 		.acpi_match_table = mlxbf3_gpio_acpi_match,
 	},
 	.probe    = mlxbf3_gpio_probe,
+	.shutdown = mlxbf3_gpio_shutdown,
 };
 
 module_platform_driver(mlxbf3_gpio_driver);


@@ -1057,6 +1057,9 @@ static int amdgpu_cs_patch_ibs(struct amdgpu_cs_parser *p,
 		r = amdgpu_ring_parse_cs(ring, p, job, ib);
 		if (r)
 			return r;
+
+		if (ib->sa_bo)
+			ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);
 	} else {
 		ib->ptr = (uint32_t *)kptr;
 		r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);


@@ -685,16 +685,24 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
 	switch (args->in.op) {
 	case AMDGPU_CTX_OP_ALLOC_CTX:
+		if (args->in.flags)
+			return -EINVAL;
 		r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);
 		args->out.alloc.ctx_id = id;
 		break;
 	case AMDGPU_CTX_OP_FREE_CTX:
+		if (args->in.flags)
+			return -EINVAL;
 		r = amdgpu_ctx_free(fpriv, id);
 		break;
 	case AMDGPU_CTX_OP_QUERY_STATE:
+		if (args->in.flags)
+			return -EINVAL;
 		r = amdgpu_ctx_query(adev, fpriv, id, &args->out);
 		break;
 	case AMDGPU_CTX_OP_QUERY_STATE2:
+		if (args->in.flags)
+			return -EINVAL;
 		r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);
 		break;
 	case AMDGPU_CTX_OP_GET_STABLE_PSTATE:


@@ -509,6 +509,16 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
 	int i, r = 0;
 	int j;
 
+	if (adev->enable_mes) {
+		for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+			j = i + xcc_id * adev->gfx.num_compute_rings;
+			amdgpu_mes_unmap_legacy_queue(adev,
+						      &adev->gfx.compute_ring[j],
+						      RESET_QUEUES, 0, 0);
+		}
+		return 0;
+	}
+
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
@@ -551,6 +561,18 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
 	int i, r = 0;
 	int j;
 
+	if (adev->enable_mes) {
+		if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) {
+			for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+				j = i + xcc_id * adev->gfx.num_gfx_rings;
+				amdgpu_mes_unmap_legacy_queue(adev,
+							      &adev->gfx.gfx_ring[j],
+							      PREEMPT_QUEUES, 0, 0);
+			}
+		}
+		return 0;
+	}
+
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
@@ -995,7 +1017,7 @@ uint32_t amdgpu_kiq_rreg(struct amdgpu_device *adev, uint32_t reg, uint32_t xcc_
 	if (amdgpu_device_skip_hw_access(adev))
 		return 0;
 
-	if (adev->mes.ring.sched.ready)
+	if (adev->mes.ring[0].sched.ready)
 		return amdgpu_mes_rreg(adev, reg);
 
 	BUG_ON(!ring->funcs->emit_rreg);
@@ -1065,7 +1087,7 @@ void amdgpu_kiq_wreg(struct amdgpu_device *adev, uint32_t reg, uint32_t v, uint3
 	if (amdgpu_device_skip_hw_access(adev))
 		return;
 
-	if (adev->mes.ring.sched.ready) {
+	if (adev->mes.ring[0].sched.ready) {
 		amdgpu_mes_wreg(adev, reg, v);
 		return;
 	}


@@ -589,7 +589,8 @@ int amdgpu_gmc_allocate_vm_inv_eng(struct amdgpu_device *adev)
 		ring = adev->rings[i];
 		vmhub = ring->vm_hub;
 
-		if (ring == &adev->mes.ring ||
+		if (ring == &adev->mes.ring[0] ||
+		    ring == &adev->mes.ring[1] ||
 		    ring == &adev->umsch_mm.ring)
 			continue;
@@ -761,7 +762,7 @@ void amdgpu_gmc_fw_reg_write_reg_wait(struct amdgpu_device *adev,
 	unsigned long flags;
 	uint32_t seq;
 
-	if (adev->mes.ring.sched.ready) {
+	if (adev->mes.ring[0].sched.ready) {
 		amdgpu_mes_reg_write_reg_wait(adev, reg0, reg1,
 					      ref, mask);
 		return;


@@ -135,9 +135,11 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 	idr_init(&adev->mes.queue_id_idr);
 	ida_init(&adev->mes.doorbell_ida);
 	spin_lock_init(&adev->mes.queue_id_lock);
-	spin_lock_init(&adev->mes.ring_lock);
 	mutex_init(&adev->mes.mutex_hidden);
 
+	for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++)
+		spin_lock_init(&adev->mes.ring_lock[i]);
+
 	adev->mes.total_max_queue = AMDGPU_FENCE_MES_QUEUE_ID_MASK;
 	adev->mes.vmid_mask_mmhub = 0xffffff00;
 	adev->mes.vmid_mask_gfxhub = 0xffffff00;
@@ -163,36 +165,38 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 		adev->mes.sdma_hqd_mask[i] = 0xfc;
 	}
 
-	r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs);
-	if (r) {
-		dev_err(adev->dev,
-			"(%d) ring trail_fence_offs wb alloc failed\n", r);
-		goto error_ids;
-	}
-	adev->mes.sch_ctx_gpu_addr =
-		adev->wb.gpu_addr + (adev->mes.sch_ctx_offs * 4);
-	adev->mes.sch_ctx_ptr =
-		(uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs];
+	for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {
+		r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs[i]);
+		if (r) {
+			dev_err(adev->dev,
+				"(%d) ring trail_fence_offs wb alloc failed\n",
+				r);
+			goto error;
+		}
+		adev->mes.sch_ctx_gpu_addr[i] =
+			adev->wb.gpu_addr + (adev->mes.sch_ctx_offs[i] * 4);
+		adev->mes.sch_ctx_ptr[i] =
+			(uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs[i]];
 
-	r = amdgpu_device_wb_get(adev, &adev->mes.query_status_fence_offs);
-	if (r) {
-		amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);
-		dev_err(adev->dev,
-			"(%d) query_status_fence_offs wb alloc failed\n", r);
-		goto error_ids;
+		r = amdgpu_device_wb_get(adev,
+					 &adev->mes.query_status_fence_offs[i]);
+		if (r) {
+			dev_err(adev->dev,
+				"(%d) query_status_fence_offs wb alloc failed\n",
+				r);
+			goto error;
+		}
+		adev->mes.query_status_fence_gpu_addr[i] = adev->wb.gpu_addr +
+			(adev->mes.query_status_fence_offs[i] * 4);
+		adev->mes.query_status_fence_ptr[i] =
+			(uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs[i]];
 	}
-	adev->mes.query_status_fence_gpu_addr =
-		adev->wb.gpu_addr + (adev->mes.query_status_fence_offs * 4);
-	adev->mes.query_status_fence_ptr =
-		(uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs];
 
 	r = amdgpu_device_wb_get(adev, &adev->mes.read_val_offs);
 	if (r) {
-		amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);
-		amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);
 		dev_err(adev->dev,
 			"(%d) read_val_offs alloc failed\n", r);
-		goto error_ids;
+		goto error;
 	}
 	adev->mes.read_val_gpu_addr =
 		adev->wb.gpu_addr + (adev->mes.read_val_offs * 4);
@@ -212,10 +216,16 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 error_doorbell:
 	amdgpu_mes_doorbell_free(adev);
 error:
-	amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);
-	amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);
-	amdgpu_device_wb_free(adev, adev->mes.read_val_offs);
-error_ids:
+	for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {
+		if (adev->mes.sch_ctx_ptr[i])
+			amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]);
+		if (adev->mes.query_status_fence_ptr[i])
+			amdgpu_device_wb_free(adev,
+					      adev->mes.query_status_fence_offs[i]);
+	}
+	if (adev->mes.read_val_ptr)
+		amdgpu_device_wb_free(adev, adev->mes.read_val_offs);
+
 	idr_destroy(&adev->mes.pasid_idr);
 	idr_destroy(&adev->mes.gang_id_idr);
 	idr_destroy(&adev->mes.queue_id_idr);
@@ -226,13 +236,22 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 void amdgpu_mes_fini(struct amdgpu_device *adev)
 {
+	int i;
+
 	amdgpu_bo_free_kernel(&adev->mes.event_log_gpu_obj,
 			      &adev->mes.event_log_gpu_addr,
 			      &adev->mes.event_log_cpu_addr);
 
-	amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);
-	amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);
-	amdgpu_device_wb_free(adev, adev->mes.read_val_offs);
+	for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {
+		if (adev->mes.sch_ctx_ptr[i])
+			amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]);
+		if (adev->mes.query_status_fence_ptr[i])
+			amdgpu_device_wb_free(adev,
+					      adev->mes.query_status_fence_offs[i]);
+	}
+	if (adev->mes.read_val_ptr)
+		amdgpu_device_wb_free(adev, adev->mes.read_val_offs);
+
 	amdgpu_mes_doorbell_free(adev);
 
 	idr_destroy(&adev->mes.pasid_idr);
@@ -1499,7 +1518,7 @@ int amdgpu_mes_init_microcode(struct amdgpu_device *adev, int pipe)
 	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix,
 				       sizeof(ucode_prefix));
-	if (adev->enable_uni_mes && pipe == AMDGPU_MES_SCHED_PIPE) {
+	if (adev->enable_uni_mes) {
 		snprintf(fw_name, sizeof(fw_name),
 			 "amdgpu/%s_uni_mes.bin", ucode_prefix);
 	} else if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&


@@ -82,8 +82,8 @@ struct amdgpu_mes {
 	uint64_t			default_process_quantum;
 	uint64_t			default_gang_quantum;
 
-	struct amdgpu_ring		ring;
-	spinlock_t			ring_lock;
+	struct amdgpu_ring		ring[AMDGPU_MAX_MES_PIPES];
+	spinlock_t			ring_lock[AMDGPU_MAX_MES_PIPES];
 
 	const struct firmware		*fw[AMDGPU_MAX_MES_PIPES];
@@ -112,12 +112,12 @@ struct amdgpu_mes {
 	uint32_t			gfx_hqd_mask[AMDGPU_MES_MAX_GFX_PIPES];
 	uint32_t			sdma_hqd_mask[AMDGPU_MES_MAX_SDMA_PIPES];
 	uint32_t			aggregated_doorbells[AMDGPU_MES_PRIORITY_NUM_LEVELS];
-	uint32_t			sch_ctx_offs;
-	uint64_t			sch_ctx_gpu_addr;
-	uint64_t			*sch_ctx_ptr;
-	uint32_t			query_status_fence_offs;
-	uint64_t			query_status_fence_gpu_addr;
-	uint64_t			*query_status_fence_ptr;
+	uint32_t			sch_ctx_offs[AMDGPU_MAX_MES_PIPES];
+	uint64_t			sch_ctx_gpu_addr[AMDGPU_MAX_MES_PIPES];
+	uint64_t			*sch_ctx_ptr[AMDGPU_MAX_MES_PIPES];
+	uint32_t			query_status_fence_offs[AMDGPU_MAX_MES_PIPES];
+	uint64_t			query_status_fence_gpu_addr[AMDGPU_MAX_MES_PIPES];
+	uint64_t			*query_status_fence_ptr[AMDGPU_MAX_MES_PIPES];
 	uint32_t			read_val_offs;
 	uint64_t			read_val_gpu_addr;
 	uint32_t			*read_val_ptr;


@@ -212,6 +212,8 @@ int amdgpu_ring_init(struct amdgpu_device *adev, struct amdgpu_ring *ring,
 	 */
 	if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ)
 		sched_hw_submission = max(sched_hw_submission, 256);
+	if (ring->funcs->type == AMDGPU_RING_TYPE_MES)
+		sched_hw_submission = 8;
 	else if (ring == &adev->sdma.instance[0].page)
 		sched_hw_submission = 256;


@@ -461,8 +461,11 @@ struct amdgpu_vcn5_fw_shared {
 	struct amdgpu_fw_shared_unified_queue_struct sq;
 	uint8_t pad1[8];
 	struct amdgpu_fw_shared_fw_logging fw_log;
+	uint8_t pad2[20];
 	struct amdgpu_fw_shared_rb_setup rb_setup;
-	uint8_t pad2[4];
+	struct amdgpu_fw_shared_smu_interface_info smu_dpm_interface;
+	struct amdgpu_fw_shared_drm_key_wa drm_key_wa;
+	uint8_t pad3[9];
 };
 
 #define VCN_BLOCK_ENCODE_DISABLE_MASK 0x80


@@ -858,7 +858,7 @@ void amdgpu_virt_post_reset(struct amdgpu_device *adev)
 		adev->gfx.is_poweron = false;
 	}
 
-	adev->mes.ring.sched.ready = false;
+	adev->mes.ring[0].sched.ready = false;
 }
 
 bool amdgpu_virt_fw_load_skip_check(struct amdgpu_device *adev, uint32_t ucode_id)


@@ -3546,33 +3546,9 @@ static int gfx_v12_0_hw_init(void *handle)
 	return r;
 }
 
-static int gfx_v12_0_kiq_disable_kgq(struct amdgpu_device *adev)
-{
-	struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];
-	struct amdgpu_ring *kiq_ring = &kiq->ring;
-	int i, r = 0;
-
-	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
-		return -EINVAL;
-
-	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
-					adev->gfx.num_gfx_rings))
-		return -ENOMEM;
-
-	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
-		kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.gfx_ring[i],
-					   PREEMPT_QUEUES, 0, 0);
-
-	if (adev->gfx.kiq[0].ring.sched.ready)
-		r = amdgpu_ring_test_helper(kiq_ring);
-
-	return r;
-}
-
 static int gfx_v12_0_hw_fini(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	int r;
 	uint32_t tmp;
 
 	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
@@ -3580,8 +3556,7 @@ static int gfx_v12_0_hw_fini(void *handle)
 
 	if (!adev->no_hw_access) {
 		if (amdgpu_async_gfx_ring) {
-			r = gfx_v12_0_kiq_disable_kgq(adev);
-			if (r)
+			if (amdgpu_gfx_disable_kgq(adev, 0))
 				DRM_ERROR("KGQ disable failed\n");
 		}


@@ -231,7 +231,7 @@ static void gmc_v11_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	/* This is necessary for SRIOV as well as for GFXOFF to function
 	 * properly under bare metal
 	 */
-	if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) &&
+	if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) &&
 	    (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) {
 		amdgpu_gmc_fw_reg_write_reg_wait(adev, req, ack, inv_req,
 						 1 << vmid, GET_INST(GC, 0));


@@ -299,7 +299,7 @@ static void gmc_v12_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	/* This is necessary for SRIOV as well as for GFXOFF to function
 	 * properly under bare metal
 	 */
-	if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) &&
+	if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) &&
 	    (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) {
 		struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
 		const unsigned eng = 17;

Some files were not shown because too many files have changed in this diff.