Merge branch 'for-6.7/cxl-rch-eh' into cxl/next

Restricted CXL Host (RCH) Error Handling undoes the topology munging of
CXL 1.1 to enable some AER recovery, and lands some base infrastructure
for handling Root-Complex-Event-Collectors (RCECs) with CXL. Include
this long-running series finally for v6.7.
Dan Williams 2023-10-31 10:59:00 -07:00
commit 7f946e6d83
248 changed files with 3044 additions and 1787 deletions


@ -92,6 +92,13 @@ Brief summary of control files.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
memory.kmem.limit_in_bytes Deprecated knob to set and read the kernel
memory hard limit. Kernel hard limit is not
supported since 5.16. Writing any value to
this file has no effect, the same as if the
nokmem kernel parameter was specified.
Kernel memory is still charged and reported
by memory.kmem.usage_in_bytes.
memory.kmem.usage_in_bytes show current kernel memory allocation
memory.kmem.failcnt show the number of kernel memory usage
hits limits
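
A hedged illustration (userspace C; assumes a cgroup v1 memory hierarchy
mounted at /sys/fs/cgroup/memory) of reading the usage counter that is
still charged even though the limit knob above is a no-op::

	#include <stdio.h>

	int main(void)
	{
		/* Path assumes cgroup v1; adjust for your mount point. */
		FILE *f = fopen("/sys/fs/cgroup/memory/memory.kmem.usage_in_bytes", "r");
		unsigned long long bytes;

		if (f && fscanf(f, "%llu", &bytes) == 1)
			printf("kernel memory charged: %llu bytes\n", bytes);
		if (f)
			fclose(f);
		return 0;
	}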


@ -38,6 +38,7 @@ patternProperties:
ID number 0 and the slave drive will have ID number 1. The PATA port
nodes will be named "ide-port".
type: object
additionalProperties: false
properties:
reg:


@ -37,6 +37,9 @@ properties:
maxItems: 1
'#clock-cells':
description:
The index in assigned-clocks is mapped to the output clock as below:
0 - REF, 1 - SE1, 2 - SE2, 3 - SE3, 4 - DIFF1, 5 - DIFF2.
const: 1
clocks:
@ -68,7 +71,7 @@ examples:
reg = <0x68>;
#clock-cells = <1>;
clocks = <&x1_x2>;
clocks = <&x1>;
renesas,settings = [
80 00 11 19 4c 02 23 7f 83 19 08 a9 5f 25 24 bf
@ -79,8 +82,8 @@ examples:
assigned-clocks = <&versa3 0>, <&versa3 1>,
<&versa3 2>, <&versa3 3>,
<&versa3 4>, <&versa3 5>;
assigned-clock-rates = <12288000>, <25000000>,
<12000000>, <11289600>,
<11289600>, <24000000>;
assigned-clock-rates = <24000000>, <11289600>,
<11289600>, <12000000>,
<25000000>, <12288000>;
};
};


@ -9,6 +9,9 @@ title: Freescale MXS Inter IC (I2C) Controller
maintainers:
- Shawn Guo <shawnguo@kernel.org>
allOf:
- $ref: /schemas/i2c/i2c-controller.yaml#
properties:
compatible:
enum:
@ -37,7 +40,7 @@ required:
- dmas
- dma-names
additionalProperties: false
unevaluatedProperties: false
examples:
- |


@ -11,11 +11,16 @@ maintainers:
properties:
compatible:
items:
- enum:
- loongson,ls2k0500-pmc
- loongson,ls2k1000-pmc
- const: syscon
oneOf:
- items:
- const: loongson,ls2k0500-pmc
- const: syscon
- items:
- enum:
- loongson,ls2k1000-pmc
- loongson,ls2k2000-pmc
- const: loongson,ls2k0500-pmc
- const: syscon
reg:
maxItems: 1
@ -32,6 +37,18 @@ properties:
addition, the PM code needs it to determine whether the current
SoC supports Suspend To RAM.
syscon-poweroff:
$ref: /schemas/power/reset/syscon-poweroff.yaml#
type: object
description:
Node for power off method
syscon-reboot:
$ref: /schemas/power/reset/syscon-reboot.yaml#
type: object
description:
Node for reboot method
required:
- compatible
- reg
@ -44,9 +61,23 @@ examples:
#include <dt-bindings/interrupt-controller/irq.h>
power-management@1fe27000 {
compatible = "loongson,ls2k1000-pmc", "syscon";
compatible = "loongson,ls2k1000-pmc", "loongson,ls2k0500-pmc", "syscon";
reg = <0x1fe27000 0x58>;
interrupt-parent = <&liointc1>;
interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
loongson,suspend-address = <0x0 0x1c000500>;
syscon-reboot {
compatible = "syscon-reboot";
offset = <0x30>;
mask = <0x1>;
};
syscon-poweroff {
compatible = "syscon-poweroff";
regmap = <&pmc>;
offset = <0x14>;
mask = <0x3c00>;
value = <0x3c00>;
};
};


@ -22,6 +22,13 @@ properties:
- const: fsl,imx35-cspi
- const: fsl,imx51-ecspi
- const: fsl,imx53-ecspi
- items:
- enum:
- fsl,imx25-cspi
- fsl,imx50-cspi
- fsl,imx51-cspi
- fsl,imx53-cspi
- const: fsl,imx35-cspi
- items:
- const: fsl,imx8mp-ecspi
- const: fsl,imx6ul-ecspi


@ -949,3 +949,99 @@ mmap_lock held. All in-tree users have been audited and do not seem to
depend on the mmap_lock being held, but out of tree users should verify
for themselves. If they do need it, they can return VM_FAULT_RETRY to
be called with the mmap_lock held.
---
**mandatory**
The order of opening block devices and matching or creating superblocks has
changed.
The old logic opened block devices first and then tried to find a
suitable superblock to reuse based on the block device pointer.
The new logic tries to find a suitable superblock first based on the device
number, and opens the block device afterwards.
Since opening block devices cannot happen under s_umount because of lock
ordering requirements, s_umount is now dropped while opening block devices and
reacquired before calling fill_super().
In the old logic concurrent mounters would find the superblock on the list of
superblocks for the filesystem type. Since the first opener of the block device
would hold s_umount they would wait until the superblock became either born or
was discarded due to initialization failure.
Since the new logic drops s_umount, concurrent mounters could grab s_umount and
would spin. Instead they are now made to wait using an explicit wait-wake
mechanism without having to hold s_umount.
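A minimal sketch of such an explicit wait-wake pattern (the names and the
state encoding here are hypothetical, not the actual VFS implementation)::

	#include <linux/wait.h>

	static DECLARE_WAIT_QUEUE_HEAD(sb_born_wq);
	static int sb_state;	/* hypothetical: 0 = creating, 1 = born, -1 = dying */

	/* Concurrent mounter: sleep without having to hold s_umount. */
	static void wait_for_superblock(void)
	{
		wait_event(sb_born_wq, READ_ONCE(sb_state) != 0);
	}

	/* First opener: publish the outcome, then wake all waiters. */
	static void superblock_born(void)
	{
		WRITE_ONCE(sb_state, 1);
		wake_up_all(&sb_born_wq);
	}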
---
**mandatory**
The holder of a block device is now the superblock.
The holder of a block device used to be the file_system_type which wasn't
particularly useful. It wasn't possible to go from block device to owning
superblock without matching on the device pointer stored in the superblock.
This mechanism would only work for a single device so the block layer couldn't
find the owning superblock of any additional devices.
In the old mechanism reusing or creating a superblock for a racing mount(2) and
umount(2) relied on the file_system_type as the holder. This was severely
underdocumented however:
(1) Any concurrent mounter that managed to grab an active reference on an
existing superblock was made to wait until the superblock either became
ready or until the superblock was removed from the list of superblocks of
the filesystem type. If the superblock was ready the caller would simply
reuse it.
(2) If the mounter came after deactivate_locked_super() but before
the superblock had been removed from the list of superblocks of the
filesystem type the mounter would wait until the superblock was shutdown,
reuse the block device and allocate a new superblock.
(3) If the mounter came after deactivate_locked_super() and after
the superblock had been removed from the list of superblocks of the
filesystem type the mounter would reuse the block device and allocate a new
superblock (the bd_holder pointer may still be set to the filesystem type).
Because the holder of the block device was the file_system_type any concurrent
mounter could open the block devices of any superblock of the same
file_system_type without risking seeing EBUSY because the block device was
still in use by another superblock.
Making the superblock the owner of the block device changes this as the holder
is now a unique superblock and thus block devices associated with it cannot be
reused by concurrent mounters. So a concurrent mounter in (2) could suddenly
see EBUSY when trying to open a block device whose holder was a different
superblock.
The new logic thus waits until the superblock and the devices are shut down in
->kill_sb(). Removal of the superblock from the list of superblocks of the
filesystem type is now moved to a later point when the devices are closed:
(1) Any concurrent mounter managing to grab an active reference on an existing
superblock is made to wait until the superblock is either ready or until
the superblock and all devices are shut down in ->kill_sb(). If the
superblock is ready the caller will simply reuse it.
(2) If the mounter comes after deactivate_locked_super() but before
the superblock has been removed from the list of superblocks of the
filesystem type the mounter is made to wait until the superblock and the
devices are shut down in ->kill_sb() and the superblock is removed from the
list of superblocks of the filesystem type. The mounter will allocate a new
superblock and grab ownership of the block device (the bd_holder pointer of
the block device will be set to the newly allocated superblock).
(3) This case is now collapsed into (2) as the superblock is left on the list
of superblocks of the filesystem type until all devices are shut down in
->kill_sb(). In other words, if the superblock isn't on the list of
superblocks of the filesystem type anymore, then it has given up ownership of
all associated block devices (the bd_holder pointer is NULL).
As this is a VFS level change it has no practical consequences for filesystems
other than that all of them must use one of the provided kill_litter_super(),
kill_anon_super(), or kill_block_super() helpers.
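As a hedged sketch (the "foo" name and the foo_mount() helper are
hypothetical), a block device backed filesystem simply keeps the stock
helper as its ->kill_sb()::

	#include <linux/fs.h>
	#include <linux/module.h>

	static struct file_system_type foo_fs_type = {
		.owner		= THIS_MODULE,
		.name		= "foo",
		.mount		= foo_mount,		/* hypothetical mount helper */
		.kill_sb	= kill_block_super,	/* shuts down devices, drops holdership */
		.fs_flags	= FS_REQUIRES_DEV,
	};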


@ -573,6 +573,32 @@ above, leading to:
bool "Support for foo hardware"
depends on ARCH_FOO_VENDOR || COMPILE_TEST
Optional dependencies
~~~~~~~~~~~~~~~~~~~~~
Some drivers are able to optionally use a feature from another module
or build cleanly with that module disabled, but cause a link failure
when trying to use that loadable module from a built-in driver.
The most common way to express this optional dependency in Kconfig logic
uses the slightly counterintuitive::
config FOO
tristate "Support for foo hardware"
depends on BAR || !BAR
This means that there is either a dependency on BAR that disallows
the combination of FOO=y with BAR=m, or BAR is completely disabled.
For a more formalized approach if there are multiple drivers that have
the same dependency, a helper symbol can be used, like::
config FOO
tristate "Support for foo hardware"
depends on BAR_OPTIONAL
config BAR_OPTIONAL
def_tristate BAR || !BAR
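On the C side this is conventionally paired with a header stub so that
callers need no #ifdefs; a hedged sketch (bar_do_thing() and the header
path are hypothetical)::

	/* include/linux/bar.h (hypothetical) */
	#include <linux/kconfig.h>

	#if IS_ENABLED(CONFIG_BAR)
	int bar_do_thing(void);			/* real implementation in the BAR module */
	#else
	static inline int bar_do_thing(void)	/* no-op stub when BAR is disabled */
	{
		return 0;
	}
	#endif

Because "depends on BAR || !BAR" rules out FOO=y with BAR=m, a built-in
FOO only ever sees the built-in implementation or the stub, so a direct
call to bar_do_thing() always links.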
Kconfig recursive dependency limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -1963,12 +1963,12 @@ F: drivers/irqchip/irq-aspeed-i2c-ic.c
ARM/ASPEED MACHINE SUPPORT
M: Joel Stanley <joel@jms.id.au>
R: Andrew Jeffery <andrew@aj.id.au>
R: Andrew Jeffery <andrew@codeconstruct.com.au>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
S: Supported
Q: https://patchwork.ozlabs.org/project/linux-aspeed/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joel/aspeed.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joel/bmc.git
F: Documentation/devicetree/bindings/arm/aspeed/
F: arch/arm/boot/dts/aspeed/
F: arch/arm/mach-aspeed/
@ -3058,7 +3058,7 @@ F: Documentation/devicetree/bindings/peci/peci-aspeed.yaml
F: drivers/peci/controller/peci-aspeed.c
ASPEED PINCTRL DRIVERS
M: Andrew Jeffery <andrew@aj.id.au>
M: Andrew Jeffery <andrew@codeconstruct.com.au>
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
L: linux-gpio@vger.kernel.org
@ -3075,7 +3075,7 @@ F: drivers/irqchip/irq-aspeed-scu-ic.c
F: include/dt-bindings/interrupt-controller/aspeed-scu-ic.h
ASPEED SD/MMC DRIVER
M: Andrew Jeffery <andrew@aj.id.au>
M: Andrew Jeffery <andrew@codeconstruct.com.au>
L: linux-aspeed@lists.ozlabs.org (moderated for non-subscribers)
L: openbmc@lists.ozlabs.org (moderated for non-subscribers)
L: linux-mmc@vger.kernel.org
@ -4082,7 +4082,7 @@ F: drivers/net/wireless/broadcom/brcm80211/
BROADCOM BRCMSTB GPIO DRIVER
M: Doug Berger <opendmb@gmail.com>
M: Florian Fainelli <florian.fainelli@broadcom>
M: Florian Fainelli <florian.fainelli@broadcom.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
S: Supported
F: Documentation/devicetree/bindings/gpio/brcm,brcmstb-gpio.yaml
@ -6647,6 +6647,7 @@ F: drivers/gpu/drm/panel/panel-novatek-nt36672a.c
DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS
M: Karol Herbst <kherbst@redhat.com>
M: Lyude Paul <lyude@redhat.com>
M: Danilo Krummrich <dakr@redhat.com>
L: dri-devel@lists.freedesktop.org
L: nouveau@lists.freedesktop.org
S: Supported


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 6
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -614,12 +614,12 @@ &rng_target {
/* Configure pwm clock source for timers 8 & 9 */
&timer8 {
assigned-clocks = <&abe_clkctrl OMAP4_TIMER8_CLKCTRL 24>;
assigned-clock-parents = <&sys_clkin_ck>;
assigned-clock-parents = <&sys_32k_ck>;
};
&timer9 {
assigned-clocks = <&l4_per_clkctrl OMAP4_TIMER9_CLKCTRL 24>;
assigned-clock-parents = <&sys_clkin_ck>;
assigned-clock-parents = <&sys_32k_ck>;
};
/*
@ -640,6 +640,7 @@ &uart1 {
&uart3 {
interrupts-extended = <&wakeupgen GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH
&omap4_pmx_core 0x17c>;
overrun-throttle-ms = <500>;
};
&uart4 {


@ -12,8 +12,7 @@ cpu_thermal: cpu-thermal {
polling-delay = <1000>; /* milliseconds */
coefficients = <0 20000>;
/* sensor ID */
thermal-sensors = <&bandgap 0>;
thermal-sensors = <&bandgap>;
cpu_trips: trips {
cpu_alert0: cpu_alert {


@ -12,7 +12,10 @@ cpu_thermal: cpu_thermal {
polling-delay-passive = <250>; /* milliseconds */
polling-delay = <1000>; /* milliseconds */
/* sensor ID */
/*
* See 44xx files for single sensor addressing; omap5 and dra7 also
* need the sensor ID for addressing.
*/
thermal-sensors = <&bandgap 0>;
cpu_trips: trips {


@ -69,6 +69,7 @@ abb_iva: regulator-abb-iva {
};
&cpu_thermal {
thermal-sensors = <&bandgap>;
coefficients = <0 20000>;
};


@ -79,6 +79,7 @@ abb_iva: regulator-abb-iva {
};
&cpu_thermal {
thermal-sensors = <&bandgap>;
coefficients = <348 (-9301)>;
};


@ -195,7 +195,7 @@ struct locomo_driver {
#define LOCOMO_DRIVER_NAME(_ldev) ((_ldev)->dev.driver->name)
void locomo_lcd_power(struct locomo_dev *, int, unsigned int);
extern void locomolcd_power(int on);
int locomo_driver_register(struct locomo_driver *);
void locomo_driver_unregister(struct locomo_driver *);


@ -99,7 +99,7 @@ static int omap4_pm_suspend(void)
* possible causes.
* http://www.spinics.net/lists/arm-kernel/msg218641.html
*/
pr_warn("A possible cause could be an old bootloader - try u-boot >= v2012.07\n");
pr_debug("A possible cause could be an old bootloader - try u-boot >= v2012.07\n");
} else {
pr_info("Successfully put all powerdomains to target state\n");
}
@ -257,7 +257,7 @@ int __init omap4_pm_init(void)
* http://www.spinics.net/lists/arm-kernel/msg218641.html
*/
if (cpu_is_omap44xx())
pr_warn("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
pr_debug("OMAP4 PM: u-boot >= v2012.07 is required for full PM support\n");
ret = pwrdm_for_each(pwrdms_setup, NULL);
if (ret) {


@ -16,8 +16,6 @@
#include "hardware.h" /* Gives GPIO_MAX */
extern void locomolcd_power(int on);
#define COLLIE_SCOOP_GPIO_BASE (GPIO_MAX + 1)
#define COLLIE_GPIO_CHARGE_ON (COLLIE_SCOOP_GPIO_BASE + 0)
#define COLLIE_SCP_DIAG_BOOT1 SCOOP_GPCR_PA12


@ -58,11 +58,13 @@
((op & UNIPHIER_SSCOQM_S_MASK) == UNIPHIER_SSCOQM_S_RANGE)
/**
* uniphier_cache_data - UniPhier outer cache specific data
* struct uniphier_cache_data - UniPhier outer cache specific data
*
* @ctrl_base: virtual base address of control registers
* @rev_base: virtual base address of revision registers
* @op_base: virtual base address of operation registers
* @way_ctrl_base: virtual address of the way control registers for this
* SoC revision
* @way_mask: each bit specifies if the way is present
* @nsets: number of associativity sets
* @line_size: line size in bytes


@ -66,6 +66,7 @@ dtb-$(CONFIG_ARCH_MXC) += imx8mm-mx8menlo.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-nitrogen-r2.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-phg.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-phyboard-polis-rdk.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-prt8mm.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-tqma8mqml-mba8mx.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-var-som-symphony.dtb
dtb-$(CONFIG_ARCH_MXC) += imx8mm-venice-gw71xx-0x.dtb


@ -26,7 +26,7 @@ hdmi-connector {
port {
hdmi_connector_in: endpoint {
remote-endpoint = <&adv7533_out>;
remote-endpoint = <&adv7535_out>;
};
};
};
@ -72,6 +72,13 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
enable-active-high;
};
reg_vddext_3v3: regulator-vddext-3v3 {
compatible = "regulator-fixed";
regulator-name = "VDDEXT_3V3";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
backlight: backlight {
compatible = "pwm-backlight";
pwms = <&pwm1 0 5000000 0>;
@ -317,15 +324,16 @@ &i2c2 {
hdmi@3d {
compatible = "adi,adv7535";
reg = <0x3d>, <0x3c>, <0x3e>, <0x3f>;
reg-names = "main", "cec", "edid", "packet";
reg = <0x3d>;
interrupt-parent = <&gpio1>;
interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
adi,dsi-lanes = <4>;
adi,input-depth = <8>;
adi,input-colorspace = "rgb";
adi,input-clock = "1x";
adi,input-style = <1>;
adi,input-justification = "evenly";
avdd-supply = <&buck5_reg>;
dvdd-supply = <&buck5_reg>;
pvdd-supply = <&buck5_reg>;
a2vdd-supply = <&buck5_reg>;
v3p3-supply = <&reg_vddext_3v3>;
v1p2-supply = <&buck5_reg>;
ports {
#address-cells = <1>;
@ -334,7 +342,7 @@ ports {
port@0 {
reg = <0>;
adv7533_in: endpoint {
adv7535_in: endpoint {
remote-endpoint = <&dsi_out>;
};
};
@ -342,7 +350,7 @@ adv7533_in: endpoint {
port@1 {
reg = <1>;
adv7533_out: endpoint {
adv7535_out: endpoint {
remote-endpoint = <&hdmi_connector_in>;
};
};
@ -448,7 +456,7 @@ port@1 {
reg = <1>;
dsi_out: endpoint {
remote-endpoint = <&adv7533_in>;
remote-endpoint = <&adv7535_in>;
data-lanes = <1 2 3 4>;
};
};


@ -381,9 +381,10 @@ &pcie_phy {
&sai3 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sai3>;
assigned-clocks = <&clk IMX8MP_CLK_SAI3>;
assigned-clocks = <&clk IMX8MP_CLK_SAI3>,
<&clk IMX8MP_AUDIO_PLL2> ;
assigned-clock-parents = <&clk IMX8MP_AUDIO_PLL2_OUT>;
assigned-clock-rates = <12288000>;
assigned-clock-rates = <12288000>, <361267200>;
fsl,sai-mclk-direction-output;
status = "okay";
};


@ -790,6 +790,12 @@ pgc_audio: power-domain@5 {
reg = <IMX8MP_POWER_DOMAIN_AUDIOMIX>;
clocks = <&clk IMX8MP_CLK_AUDIO_ROOT>,
<&clk IMX8MP_CLK_AUDIO_AXI>;
assigned-clocks = <&clk IMX8MP_CLK_AUDIO_AHB>,
<&clk IMX8MP_CLK_AUDIO_AXI_SRC>;
assigned-clock-parents = <&clk IMX8MP_SYS_PLL1_800M>,
<&clk IMX8MP_SYS_PLL1_800M>;
assigned-clock-rates = <400000000>,
<600000000>;
};
pgc_gpu2d: power-domain@6 {


@ -81,7 +81,7 @@ flash0: flash@0 {
&gpio1 {
pmic-irq-hog {
gpio-hog;
gpios = <2 GPIO_ACTIVE_LOW>;
gpios = <3 GPIO_ACTIVE_LOW>;
input;
line-name = "PMIC_IRQ#";
};


@ -2957,7 +2957,7 @@ merge1: vpp-merge@1c10c000 {
clock-names = "merge","merge_async";
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xc000 0x1000>;
mediatek,merge-mute = <1>;
mediatek,merge-mute;
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE0_DL_ASYNC>;
};
@ -2970,7 +2970,7 @@ merge2: vpp-merge@1c10d000 {
clock-names = "merge","merge_async";
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xd000 0x1000>;
mediatek,merge-mute = <1>;
mediatek,merge-mute;
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE1_DL_ASYNC>;
};
@ -2983,7 +2983,7 @@ merge3: vpp-merge@1c10e000 {
clock-names = "merge","merge_async";
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xe000 0x1000>;
mediatek,merge-mute = <1>;
mediatek,merge-mute;
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE2_DL_ASYNC>;
};
@ -2996,7 +2996,7 @@ merge4: vpp-merge@1c10f000 {
clock-names = "merge","merge_async";
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0xf000 0x1000>;
mediatek,merge-mute = <1>;
mediatek,merge-mute;
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE3_DL_ASYNC>;
};
@ -3009,7 +3009,7 @@ merge5: vpp-merge@1c110000 {
clock-names = "merge","merge_async";
power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
mediatek,gce-client-reg = <&gce0 SUBSYS_1c11XXXX 0x0000 0x1000>;
mediatek,merge-fifo-en = <1>;
mediatek,merge-fifo-en;
resets = <&vdosys1 MT8195_VDOSYS1_SW0_RST_B_MERGE4_DL_ASYNC>;
};


@ -636,6 +636,7 @@ CONFIG_POWER_RESET_MSM=y
CONFIG_POWER_RESET_QCOM_PON=m
CONFIG_POWER_RESET_XGENE=y
CONFIG_POWER_RESET_SYSCON=y
CONFIG_POWER_RESET_SYSCON_POWEROFF=y
CONFIG_SYSCON_REBOOT_MODE=y
CONFIG_NVMEM_REBOOT_MODE=m
CONFIG_BATTERY_SBS=m
@ -1175,7 +1176,6 @@ CONFIG_COMMON_CLK_S2MPS11=y
CONFIG_COMMON_CLK_PWM=y
CONFIG_COMMON_CLK_RS9_PCIE=y
CONFIG_COMMON_CLK_VC5=y
CONFIG_COMMON_CLK_NPCM8XX=y
CONFIG_COMMON_CLK_BD718XX=m
CONFIG_CLK_RASPBERRYPI=m
CONFIG_CLK_IMX8MM=y


@ -28,7 +28,7 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags);
#define arch_make_huge_pte arch_make_huge_pte
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
extern void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte);
pte_t *ptep, pte_t pte, unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,


@ -241,15 +241,8 @@ static void clear_flush(struct mm_struct *mm,
flush_tlb_range(&vma, saddr, addr);
}
static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
{
VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
return page_folio(pfn_to_page(swp_offset_pfn(entry)));
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte)
pte_t *ptep, pte_t pte, unsigned long sz)
{
size_t pgsize;
int i;
@ -257,13 +250,10 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
unsigned long pfn, dpfn;
pgprot_t hugeprot;
ncontig = num_contig_ptes(sz, &pgsize);
if (!pte_present(pte)) {
struct folio *folio;
folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
ncontig = num_contig_ptes(folio_size(folio), &pgsize);
for (i = 0; i < ncontig; i++, ptep++)
for (i = 0; i < ncontig; i++, ptep++, addr += pgsize)
set_pte_at(mm, addr, ptep, pte);
return;
}
@ -273,7 +263,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
return;
}
ncontig = find_num_contig(mm, addr, ptep, &pgsize);
pfn = pte_pfn(pte);
dpfn = pgsize >> PAGE_SHIFT;
hugeprot = pte_pgprot(pte);
@ -571,5 +560,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
pte_t old_pte, pte_t pte)
{
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
unsigned long psize = huge_page_size(hstate_vma(vma));
set_huge_pte_at(vma->vm_mm, addr, ptep, pte, psize);
}


@ -111,6 +111,15 @@
#define R_LARCH_TLS_GD_HI20 98
#define R_LARCH_32_PCREL 99
#define R_LARCH_RELAX 100
#define R_LARCH_DELETE 101
#define R_LARCH_ALIGN 102
#define R_LARCH_PCREL20_S2 103
#define R_LARCH_CFA 104
#define R_LARCH_ADD6 105
#define R_LARCH_SUB6 106
#define R_LARCH_ADD_ULEB128 107
#define R_LARCH_SUB_ULEB128 108
#define R_LARCH_64_PCREL 109
#ifndef ELF_ARCH


@ -367,6 +367,24 @@ static int apply_r_larch_got_pc(struct module *mod,
return apply_r_larch_pcala(mod, location, got, rela_stack, rela_stack_top, type);
}
static int apply_r_larch_32_pcrel(struct module *mod, u32 *location, Elf_Addr v,
s64 *rela_stack, size_t *rela_stack_top, unsigned int type)
{
ptrdiff_t offset = (void *)v - (void *)location;
*(u32 *)location = offset;
return 0;
}
static int apply_r_larch_64_pcrel(struct module *mod, u32 *location, Elf_Addr v,
s64 *rela_stack, size_t *rela_stack_top, unsigned int type)
{
ptrdiff_t offset = (void *)v - (void *)location;
*(u64 *)location = offset;
return 0;
}
/*
* reloc_handlers_rela() - Apply a particular relocation to a module
* @mod: the module to apply the reloc to
@ -382,7 +400,7 @@ typedef int (*reloc_rela_handler)(struct module *mod, u32 *location, Elf_Addr v,
/* The handlers for known reloc types */
static reloc_rela_handler reloc_rela_handlers[] = {
[R_LARCH_NONE ... R_LARCH_RELAX] = apply_r_larch_error,
[R_LARCH_NONE ... R_LARCH_64_PCREL] = apply_r_larch_error,
[R_LARCH_NONE] = apply_r_larch_none,
[R_LARCH_32] = apply_r_larch_32,
@ -396,6 +414,8 @@ static reloc_rela_handler reloc_rela_handlers[] = {
[R_LARCH_SOP_POP_32_S_10_5 ... R_LARCH_SOP_POP_32_U] = apply_r_larch_sop_imm_field,
[R_LARCH_ADD32 ... R_LARCH_SUB64] = apply_r_larch_add_sub,
[R_LARCH_PCALA_HI20...R_LARCH_PCALA64_HI12] = apply_r_larch_pcala,
[R_LARCH_32_PCREL] = apply_r_larch_32_pcrel,
[R_LARCH_64_PCREL] = apply_r_larch_64_pcrel,
};
int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,


@ -436,7 +436,7 @@ void __init paging_init(void)
void __init mem_init(void)
{
high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT);
high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT);
memblock_free_all();
}


@ -164,6 +164,7 @@ static struct platform_device db1x00_audio_dev = {
/******************************************************************************/
#ifdef CONFIG_MMC_AU1X
static irqreturn_t db1100_mmc_cd(int irq, void *ptr)
{
mmc_detect_change(ptr, msecs_to_jiffies(500));
@ -369,6 +370,7 @@ static struct platform_device db1100_mmc1_dev = {
.num_resources = ARRAY_SIZE(au1100_mmc1_res),
.resource = au1100_mmc1_res,
};
#endif /* CONFIG_MMC_AU1X */
/******************************************************************************/
@ -440,8 +442,10 @@ static struct platform_device *db1x00_devs[] = {
static struct platform_device *db1100_devs[] = {
&au1100_lcd_device,
#ifdef CONFIG_MMC_AU1X
&db1100_mmc0_dev,
&db1100_mmc1_dev,
#endif
};
int __init db1000_dev_setup(void)


@ -326,6 +326,7 @@ static struct platform_device db1200_ide_dev = {
/**********************************************************************/
#ifdef CONFIG_MMC_AU1X
/* SD carddetects: they're supposed to be edge-triggered, but ack
* doesn't seem to work (CPLD Rev 2). Instead, the screaming one
* is disabled and its counterpart enabled. The 200ms timeout is
@ -584,6 +585,7 @@ static struct platform_device pb1200_mmc1_dev = {
.num_resources = ARRAY_SIZE(au1200_mmc1_res),
.resource = au1200_mmc1_res,
};
#endif /* CONFIG_MMC_AU1X */
/**********************************************************************/
@ -751,7 +753,9 @@ static struct platform_device db1200_audiodma_dev = {
static struct platform_device *db1200_devs[] __initdata = {
NULL, /* PSC0, selected by S6.8 */
&db1200_ide_dev,
#ifdef CONFIG_MMC_AU1X
&db1200_mmc0_dev,
#endif
&au1200_lcd_dev,
&db1200_eth_dev,
&db1200_nand_dev,
@ -762,7 +766,9 @@ static struct platform_device *db1200_devs[] __initdata = {
};
static struct platform_device *pb1200_devs[] __initdata = {
#ifdef CONFIG_MMC_AU1X
&pb1200_mmc1_dev,
#endif
};
/* Some peripheral base addresses differ on the PB1200 */


@ -450,6 +450,7 @@ static struct platform_device db1300_ide_dev = {
/**********************************************************************/
#ifdef CONFIG_MMC_AU1X
static irqreturn_t db1300_mmc_cd(int irq, void *ptr)
{
disable_irq_nosync(irq);
@ -632,6 +633,7 @@ static struct platform_device db1300_sd0_dev = {
.resource = au1300_sd0_res,
.num_resources = ARRAY_SIZE(au1300_sd0_res),
};
#endif /* CONFIG_MMC_AU1X */
/**********************************************************************/
@ -767,8 +769,10 @@ static struct platform_device *db1300_dev[] __initdata = {
&db1300_5waysw_dev,
&db1300_nand_dev,
&db1300_ide_dev,
#ifdef CONFIG_MMC_AU1X
&db1300_sd0_dev,
&db1300_sd1_dev,
#endif
&db1300_lcd_dev,
&db1300_ac97_dev,
&db1300_i2s_dev,


@ -6,7 +6,7 @@
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte);
pte_t *ptep, pte_t pte, unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,


@ -140,7 +140,7 @@ static void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t entry)
pte_t *ptep, pte_t entry, unsigned long sz)
{
__set_huge_pte_at(mm, addr, ptep, entry);
}


@ -46,7 +46,8 @@ static inline int check_and_get_huge_psize(int shift)
}
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte);
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
pte_t pte, unsigned long sz);
#define __HAVE_ARCH_HUGE_PTE_CLEAR
static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,


@ -73,29 +73,12 @@ int __no_sanitize_address arch_stack_walk_reliable(stack_trace_consume_fn consum
bool firstframe;
stack_end = stack_page + THREAD_SIZE;
if (!is_idle_task(task)) {
/*
* For user tasks, this is the SP value loaded on
* kernel entry, see "PACAKSAVE(r13)" in _switch() and
* system_call_common().
*
* Likewise for non-swapper kernel threads,
* this also happens to be the top of the stack
* as setup by copy_thread().
*
* Note that stack backlinks are not properly setup by
* copy_thread() and thus, a forked task() will have
* an unreliable stack trace until it's been
* _switch()'ed to for the first time.
*/
stack_end -= STACK_USER_INT_FRAME_SIZE;
} else {
/*
* idle tasks have a custom stack layout,
* c.f. cpu_idle_thread_init().
*/
// See copy_thread() for details.
if (task->flags & PF_KTHREAD)
stack_end -= STACK_FRAME_MIN_SIZE;
}
else
stack_end -= STACK_USER_INT_FRAME_SIZE;
if (task == current)
sp = current_stack_frame();


@ -143,11 +143,14 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr,
pte_t *ptep, pte_t old_pte, pte_t pte)
{
unsigned long psize;
if (radix_enabled())
return radix__huge_ptep_modify_prot_commit(vma, addr, ptep,
old_pte, pte);
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
psize = huge_page_size(hstate_vma(vma));
set_huge_pte_at(vma->vm_mm, addr, ptep, pte, psize);
}
void __init hugetlbpage_init_defaultsize(void)


@ -47,6 +47,7 @@ void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
pte_t old_pte, pte_t pte)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long psize = huge_page_size(hstate_vma(vma));
/*
* POWER9 NMMU must flush the TLB after clearing the PTE before
@ -58,5 +59,5 @@ void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
atomic_read(&mm->context.copros) > 0)
radix__flush_hugetlb_page(vma, addr);
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
set_huge_pte_at(vma->vm_mm, addr, ptep, pte, psize);
}


@ -91,7 +91,8 @@ static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
if (new && WARN_ON(pte_present(*ptep) && pgprot_val(prot)))
return -EINVAL;
set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));
set_huge_pte_at(&init_mm, va, ptep,
pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)), psize);
return 0;
}


@ -288,7 +288,8 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
}
#if defined(CONFIG_PPC_8xx)
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep, pte_t pte)
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
pte_t pte, unsigned long sz)
{
pmd_t *pmd = pmd_off(mm, addr);
pte_basic_t val;


@ -262,7 +262,7 @@ uboot@100000 {
reg = <0x100000 0x400000>;
};
reserved-data@600000 {
reg = <0x600000 0x1000000>;
reg = <0x600000 0xa00000>;
};
};
};
@ -440,30 +440,6 @@ GPOEN_ENABLE,
};
};
uart0_pins: uart0-0 {
tx-pins {
pinmux = <GPIOMUX(5, GPOUT_SYS_UART0_TX,
GPOEN_ENABLE,
GPI_NONE)>;
bias-disable;
drive-strength = <12>;
input-disable;
input-schmitt-disable;
slew-rate = <0>;
};
rx-pins {
pinmux = <GPIOMUX(6, GPOUT_LOW,
GPOEN_DISABLE,
GPI_SYS_UART0_RX)>;
bias-disable; /* external pull-up */
drive-strength = <2>;
input-enable;
input-schmitt-enable;
slew-rate = <0>;
};
};
tdm_pins: tdm-0 {
tx-pins {
pinmux = <GPIOMUX(44, GPOUT_SYS_TDM_TXD,
@ -497,6 +473,30 @@ GPOEN_DISABLE,
input-enable;
};
};
uart0_pins: uart0-0 {
tx-pins {
pinmux = <GPIOMUX(5, GPOUT_SYS_UART0_TX,
GPOEN_ENABLE,
GPI_NONE)>;
bias-disable;
drive-strength = <12>;
input-disable;
input-schmitt-disable;
slew-rate = <0>;
};
rx-pins {
pinmux = <GPIOMUX(6, GPOUT_LOW,
GPOEN_DISABLE,
GPI_SYS_UART0_RX)>;
bias-disable; /* external pull-up */
drive-strength = <2>;
input-enable;
input-schmitt-enable;
slew-rate = <0>;
};
};
};
&tdm {
@ -513,6 +513,7 @@ &uart0 {
&usb0 {
dr_mode = "peripheral";
status = "okay";
};
&U74_1 {


@ -18,7 +18,8 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
void set_huge_pte_at(struct mm_struct *mm,
unsigned long addr, pte_t *ptep, pte_t pte);
unsigned long addr, pte_t *ptep, pte_t pte,
unsigned long sz);
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,


@ -180,7 +180,8 @@ pte_t arch_make_huge_pte(pte_t entry, unsigned int shift, vm_flags_t flags)
void set_huge_pte_at(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep,
pte_t pte)
pte_t pte,
unsigned long sz)
{
int i, pte_num;


@ -16,6 +16,8 @@
#define hugepages_supported() (MACHINE_HAS_EDAT1)
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, unsigned long sz);
void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte);
pte_t huge_ptep_get(pte_t *ptep);
pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
@ -65,7 +67,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
int changed = !pte_same(huge_ptep_get(ptep), pte);
if (changed) {
huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
}
return changed;
}
@ -74,7 +76,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
pte_t pte = huge_ptep_get_and_clear(mm, addr, ptep);
set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte));
__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte));
}
static inline pte_t mk_huge_pte(struct page *page, pgprot_t pgprot)


@ -142,7 +142,7 @@ static void clear_huge_pte_skeys(struct mm_struct *mm, unsigned long rste)
__storage_key_init_range(paddr, paddr + size - 1);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte)
{
unsigned long rste;
@ -163,6 +163,12 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
set_pte(ptep, __pte(rste));
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, unsigned long sz)
{
__set_huge_pte_at(mm, addr, ptep, pte);
}
pte_t huge_ptep_get(pte_t *ptep)
{
return __rste_to_pte(pte_val(*ptep));


@ -14,6 +14,8 @@ extern struct pud_huge_patch_entry __pud_huge_patch, __pud_huge_patch_end;
#define __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, unsigned long sz);
void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte);
#define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
@ -32,7 +34,7 @@ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
unsigned long addr, pte_t *ptep)
{
pte_t old_pte = *ptep;
set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
}
#define __HAVE_ARCH_HUGE_PTEP_SET_ACCESS_FLAGS
@ -42,7 +44,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
{
int changed = !pte_same(*ptep, pte);
if (changed) {
set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
flush_tlb_page(vma, addr);
}
return changed;


@ -328,7 +328,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
return pte_offset_huge(pmd, addr);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t entry)
{
unsigned int nptes, orig_shift, shift;
@ -364,6 +364,12 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
orig_shift);
}
void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t entry, unsigned long sz)
{
__set_huge_pte_at(mm, addr, ptep, entry);
}
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep)
{


@ -534,8 +534,12 @@ static void amd_pmu_cpu_reset(int cpu)
/* Clear enable bits i.e. PerfCntrGlobalCtl.PerfCntrEn */
wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
/* Clear overflow bits i.e. PerfCntrGLobalStatus.PerfCntrOvfl */
wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, amd_pmu_global_cntr_mask);
/*
* Clear freeze and overflow bits i.e. PerfCntrGlobalStatus.LbrFreeze
* and PerfCntrGlobalStatus.PerfCntrOvfl
*/
wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR,
GLOBAL_STATUS_LBRS_FROZEN | amd_pmu_global_cntr_mask);
}
static int amd_pmu_cpu_prepare(int cpu)
@ -570,6 +574,7 @@ static void amd_pmu_cpu_starting(int cpu)
int i, nb_id;
cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;
amd_pmu_cpu_reset(cpu);
if (!x86_pmu.amd_nb_constraints)
return;
@ -591,8 +596,6 @@ static void amd_pmu_cpu_starting(int cpu)
cpuc->amd_nb->nb_id = nb_id;
cpuc->amd_nb->refcnt++;
amd_pmu_cpu_reset(cpu);
}
static void amd_pmu_cpu_dead(int cpu)
@ -601,6 +604,7 @@ static void amd_pmu_cpu_dead(int cpu)
kfree(cpuhw->lbr_sel);
cpuhw->lbr_sel = NULL;
amd_pmu_cpu_reset(cpu);
if (!x86_pmu.amd_nb_constraints)
return;
@ -613,8 +617,6 @@ static void amd_pmu_cpu_dead(int cpu)
cpuhw->amd_nb = NULL;
}
amd_pmu_cpu_reset(cpu);
}
static inline void amd_pmu_set_global_ctl(u64 ctl)
@ -884,7 +886,7 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
struct hw_perf_event *hwc;
struct perf_event *event;
int handled = 0, idx;
u64 status, mask;
u64 reserved, status, mask;
bool pmu_enabled;
/*
@ -909,6 +911,14 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
status &= ~GLOBAL_STATUS_LBRS_FROZEN;
}
reserved = status & ~amd_pmu_global_cntr_mask;
if (reserved)
pr_warn_once("Reserved PerfCntrGlobalStatus bits are set (0x%llx), please consider updating microcode\n",
reserved);
/* Clear any reserved bits set by buggy microcode */
status &= amd_pmu_global_cntr_mask;
for (idx = 0; idx < x86_pmu.num_counters; idx++) {
if (!test_bit(idx, cpuc->active_mask))
continue;


@ -955,6 +955,14 @@ static inline int pte_same(pte_t a, pte_t b)
return a.pte == b.pte;
}
static inline pte_t pte_next_pfn(pte_t pte)
{
if (__pte_needs_invert(pte_val(pte)))
return __pte(pte_val(pte) - (1UL << PFN_PTE_SHIFT));
return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#define pte_next_pfn pte_next_pfn
static inline int pte_present(pte_t a)
{
return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);


@ -1303,7 +1303,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_AMD(0x15, RETBLEED),
VULNBL_AMD(0x16, RETBLEED),
VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
VULNBL_AMD(0x19, SRSO),
{}
};


@ -235,6 +235,21 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page,
return epc_page;
}
/*
* Ensure the SECS page is not swapped out. Must be called with encl->lock
* held to protect the enclave state, including the SECS, and to ensure the
* SECS page is not swapped out again while being used.
*/
static struct sgx_epc_page *sgx_encl_load_secs(struct sgx_encl *encl)
{
struct sgx_epc_page *epc_page = encl->secs.epc_page;
if (!epc_page)
epc_page = sgx_encl_eldu(&encl->secs, NULL);
return epc_page;
}
static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
struct sgx_encl_page *entry)
{
@ -248,11 +263,9 @@ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,
return entry;
}
if (!(encl->secs.epc_page)) {
epc_page = sgx_encl_eldu(&encl->secs, NULL);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);
}
epc_page = sgx_encl_load_secs(encl);
if (IS_ERR(epc_page))
return ERR_CAST(epc_page);
epc_page = sgx_encl_eldu(entry, encl->secs.epc_page);
if (IS_ERR(epc_page))
@ -339,6 +352,13 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
mutex_lock(&encl->lock);
epc_page = sgx_encl_load_secs(encl);
if (IS_ERR(epc_page)) {
if (PTR_ERR(epc_page) == -EBUSY)
vmret = VM_FAULT_NOPAGE;
goto err_out_unlock;
}
epc_page = sgx_alloc_epc_page(encl_page, false);
if (IS_ERR(epc_page)) {
if (PTR_ERR(epc_page) == -EBUSY)


@ -695,7 +695,6 @@ void kgdb_arch_exit(void)
}
/**
*
* kgdb_skipexception - Bail out of KGDB when we've been triggered.
* @exception: Exception vector number
* @regs: Current &struct pt_regs.


@ -9,8 +9,7 @@
# KBUILD_CFLAGS used when building rest of boot (takes effect recursively)
KBUILD_CFLAGS += -fno-builtin -Iarch/$(ARCH)/boot/include
HOSTFLAGS += -Iarch/$(ARCH)/boot/include
KBUILD_CFLAGS += -fno-builtin
subdir-y := lib
targets += vmlinux.bin vmlinux.bin.gz


@ -4,13 +4,14 @@
/* bits taken from ppc */
extern void *avail_ram, *end_avail;
void gunzip(void *dst, int dstlen, unsigned char *src, int *lenp);
void exit (void)
static void exit(void)
{
for (;;);
}
void *zalloc(unsigned size)
static void *zalloc(unsigned int size)
{
void *p = avail_ram;


@ -6,6 +6,10 @@
#include <variant/core.h>
#ifndef XCHAL_HAVE_DIV32
#define XCHAL_HAVE_DIV32 0
#endif
#ifndef XCHAL_HAVE_EXCLUSIVE
#define XCHAL_HAVE_EXCLUSIVE 0
#endif


@ -48,6 +48,7 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp);
void hw_breakpoint_pmu_read(struct perf_event *bp);
int check_hw_breakpoint(struct pt_regs *regs);
void clear_ptrace_hw_breakpoint(struct task_struct *tsk);
void restore_dbreak(void);
#else


@ -14,6 +14,8 @@
#include <linux/compiler.h>
#include <linux/stringify.h>
#include <asm/bootparam.h>
#include <asm/ptrace.h>
#include <asm/types.h>
#include <asm/regs.h>
@ -217,6 +219,9 @@ struct mm_struct;
extern unsigned long __get_wchan(struct task_struct *p);
void init_arch(bp_tag_t *bp_start);
void do_notify_resume(struct pt_regs *regs);
#define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
#define KSTK_ESP(tsk) (task_pt_regs(tsk)->areg[1])


@ -106,6 +106,9 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
return regs->areg[2];
}
int do_syscall_trace_enter(struct pt_regs *regs);
void do_syscall_trace_leave(struct pt_regs *regs);
#else /* __ASSEMBLY__ */
# include <asm/asm-offsets.h>


@ -23,6 +23,7 @@ struct cpumask;
void arch_send_call_function_ipi_mask(const struct cpumask *mask);
void arch_send_call_function_single_ipi(int cpu);
void secondary_start_kernel(void);
void smp_init_cpus(void);
void secondary_init_irq(void);
void ipi_init(void);


@ -18,4 +18,6 @@
#define __pte_free_tlb(tlb, pte, address) pte_free((tlb)->mm, pte)
void check_tlb_sanity(void);
#endif /* _XTENSA_TLB_H */


@ -13,6 +13,7 @@
#include <linux/percpu.h>
#include <linux/perf_event.h>
#include <asm/core.h>
#include <asm/hw_breakpoint.h>
/* Breakpoint currently in use for each IBREAKA. */
static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[XCHAL_NUM_IBREAK]);


@ -28,6 +28,7 @@
#include <asm/mxregs.h>
#include <linux/uaccess.h>
#include <asm/platform.h>
#include <asm/traps.h>
DECLARE_PER_CPU(unsigned long, nmi_count);


@ -541,7 +541,6 @@ long arch_ptrace(struct task_struct *child, long request,
return ret;
}
void do_syscall_trace_leave(struct pt_regs *regs);
int do_syscall_trace_enter(struct pt_regs *regs)
{
if (regs->syscall == NO_SYSCALL)


@ -26,6 +26,8 @@
#include <linux/uaccess.h>
#include <asm/cacheflush.h>
#include <asm/coprocessor.h>
#include <asm/processor.h>
#include <asm/syscall.h>
#include <asm/unistd.h>
extern struct task_struct *coproc_owners[];


@ -21,6 +21,7 @@
#include <linux/irq.h>
#include <linux/kdebug.h>
#include <linux/module.h>
#include <linux/profile.h>
#include <linux/sched/mm.h>
#include <linux/sched/hotplug.h>
#include <linux/sched/task_stack.h>


@ -12,6 +12,7 @@
#include <linux/sched.h>
#include <linux/stacktrace.h>
#include <asm/ftrace.h>
#include <asm/stacktrace.h>
#include <asm/traps.h>
#include <linux/uaccess.h>


@ -23,6 +23,7 @@
* for more details.
*/
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/sched/signal.h>
#include <linux/sched/debug.h>


@ -3,7 +3,9 @@
#include <asm/asmmacro.h>
#include <asm/core.h>
#if !XCHAL_HAVE_MUL16 && !XCHAL_HAVE_MUL32 && !XCHAL_HAVE_MAC16
#if XCHAL_HAVE_MUL16 || XCHAL_HAVE_MUL32 || XCHAL_HAVE_MAC16
#define XCHAL_NO_MUL 0
#else
#define XCHAL_NO_MUL 1
#endif


@ -20,6 +20,7 @@
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>
#include <asm/hardirq.h>
#include <asm/traps.h>
void bad_page_fault(struct pt_regs*, unsigned long, int);


@ -17,6 +17,7 @@
#include <linux/mm.h>
#include <asm/processor.h>
#include <asm/mmu_context.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>


@ -201,7 +201,7 @@ static int tuntap_write(struct iss_net_private *lp, struct sk_buff **skb)
return simc_write(lp->tp.info.tuntap.fd, (*skb)->data, (*skb)->len);
}
unsigned short tuntap_protocol(struct sk_buff *skb)
static unsigned short tuntap_protocol(struct sk_buff *skb)
{
return eth_type_trans(skb, skb->dev);
}
@ -441,7 +441,7 @@ static int iss_net_change_mtu(struct net_device *dev, int new_mtu)
return -EINVAL;
}
void iss_net_user_timer_expire(struct timer_list *unused)
static void iss_net_user_timer_expire(struct timer_list *unused)
{
}


@ -270,7 +270,7 @@ void rq_qos_wait(struct rq_wait *rqw, void *private_data,
finish_wait(&rqw->wait, &data.wq);
/*
* We raced with wbt_wake_function() getting a token,
* We raced with rq_qos_wake_function() getting a token,
* which means we now have two. Put our local token
* and wake anyone else potentially waiting for one.
*/


@ -290,7 +290,6 @@ EXPORT_SYMBOL(disk_check_media_change);
/**
* disk_force_media_change - force a media change event
* @disk: the disk which will raise the event
* @events: the events to raise
*
* Should be called when the media changes for @disk. Generates a uevent
* and attempts to free all dentries and inodes and invalidates all block


@ -327,7 +327,7 @@ static int ivpu_wait_for_ready(struct ivpu_device *vdev)
}
if (!ret)
ivpu_info(vdev, "VPU ready message received successfully\n");
ivpu_dbg(vdev, PM, "VPU ready message received successfully\n");
else
ivpu_hw_diagnose_failure(vdev);
@ -634,6 +634,7 @@ static void ivpu_dev_fini(struct ivpu_device *vdev)
static struct pci_device_id ivpu_pci_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_MTL) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_ARL) },
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_LNL) },
{ }
};


@ -23,6 +23,7 @@
#define DRIVER_DATE "20230117"
#define PCI_DEVICE_ID_MTL 0x7d1d
#define PCI_DEVICE_ID_ARL 0xad1d
#define PCI_DEVICE_ID_LNL 0x643e
#define IVPU_HW_37XX 37
@ -165,6 +166,7 @@ static inline int ivpu_hw_gen(struct ivpu_device *vdev)
{
switch (ivpu_device_id(vdev)) {
case PCI_DEVICE_ID_MTL:
case PCI_DEVICE_ID_ARL:
return IVPU_HW_37XX;
case PCI_DEVICE_ID_LNL:
return IVPU_HW_40XX;


@ -220,7 +220,8 @@ static int ivpu_fw_mem_init(struct ivpu_device *vdev)
if (ret)
return ret;
fw->mem = ivpu_bo_alloc_internal(vdev, fw->runtime_addr, fw->runtime_size, DRM_IVPU_BO_WC);
fw->mem = ivpu_bo_alloc_internal(vdev, fw->runtime_addr, fw->runtime_size,
DRM_IVPU_BO_CACHED | DRM_IVPU_BO_NOSNOOP);
if (!fw->mem) {
ivpu_err(vdev, "Failed to allocate firmware runtime memory\n");
return -ENOMEM;
@ -330,7 +331,7 @@ int ivpu_fw_load(struct ivpu_device *vdev)
memset(start, 0, size);
}
wmb(); /* Flush WC buffers after writing fw->mem */
clflush_cache_range(fw->mem->kvaddr, fw->mem->base.size);
return 0;
}
@ -432,6 +433,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
if (!ivpu_fw_is_cold_boot(vdev)) {
boot_params->save_restore_ret_address = 0;
vdev->pm->is_warmboot = true;
clflush_cache_range(vdev->fw->mem->kvaddr, SZ_4K);
return;
}
@ -493,7 +495,7 @@ void ivpu_fw_boot_params_setup(struct ivpu_device *vdev, struct vpu_boot_params
boot_params->punit_telemetry_sram_size = ivpu_hw_reg_telemetry_size_get(vdev);
boot_params->vpu_telemetry_enable = ivpu_hw_reg_telemetry_enable_get(vdev);
wmb(); /* Flush WC buffers after writing bootparams */
clflush_cache_range(vdev->fw->mem->kvaddr, SZ_4K);
ivpu_fw_boot_params_print(vdev, boot_params);
}


@ -8,6 +8,8 @@
#include <drm/drm_gem.h>
#include <drm/drm_mm.h>
#define DRM_IVPU_BO_NOSNOOP 0x10000000
struct dma_buf;
struct ivpu_bo_ops;
struct ivpu_file_priv;
@ -83,6 +85,9 @@ static inline u32 ivpu_bo_cache_mode(struct ivpu_bo *bo)
static inline bool ivpu_bo_is_snooped(struct ivpu_bo *bo)
{
if (bo->flags & DRM_IVPU_BO_NOSNOOP)
return false;
return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED;
}


@ -57,8 +57,7 @@
#define ICB_0_1_IRQ_MASK ((((u64)ICB_1_IRQ_MASK) << 32) | ICB_0_IRQ_MASK)
#define BUTTRESS_IRQ_MASK ((REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, FREQ_CHANGE)) | \
(REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, ATS_ERR)) | \
#define BUTTRESS_IRQ_MASK ((REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, ATS_ERR)) | \
(REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, CFI0_ERR)) | \
(REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, CFI1_ERR)) | \
(REG_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, IMR0_ERR)) | \
@ -196,6 +195,14 @@ static int ivpu_pll_wait_for_status_ready(struct ivpu_device *vdev)
return REGB_POLL_FLD(VPU_40XX_BUTTRESS_VPU_STATUS, READY, 1, PLL_TIMEOUT_US);
}
static int ivpu_wait_for_clock_own_resource_ack(struct ivpu_device *vdev)
{
if (ivpu_is_simics(vdev))
return 0;
return REGB_POLL_FLD(VPU_40XX_BUTTRESS_VPU_STATUS, CLOCK_RESOURCE_OWN_ACK, 1, TIMEOUT_US);
}
static void ivpu_pll_init_frequency_ratios(struct ivpu_device *vdev)
{
struct ivpu_hw_info *hw = vdev->hw;
@ -556,6 +563,12 @@ static int ivpu_boot_pwr_domain_enable(struct ivpu_device *vdev)
{
int ret;
ret = ivpu_wait_for_clock_own_resource_ack(vdev);
if (ret) {
ivpu_err(vdev, "Timed out waiting for clock own resource ACK\n");
return ret;
}
ivpu_boot_pwr_island_trickle_drive(vdev, true);
ivpu_boot_pwr_island_drive(vdev, true);
@ -1046,9 +1059,6 @@ static irqreturn_t ivpu_hw_40xx_irqb_handler(struct ivpu_device *vdev, int irq)
if (status == 0)
return IRQ_NONE;
/* Disable global interrupt before handling local buttress interrupts */
REGB_WR32(VPU_40XX_BUTTRESS_GLOBAL_INT_MASK, 0x1);
if (REG_TEST_FLD(VPU_40XX_BUTTRESS_INTERRUPT_STAT, FREQ_CHANGE, status))
ivpu_dbg(vdev, IRQ, "FREQ_CHANGE");
@ -1096,9 +1106,6 @@ static irqreturn_t ivpu_hw_40xx_irqb_handler(struct ivpu_device *vdev, int irq)
/* This must be done after interrupts are cleared at the source. */
REGB_WR32(VPU_40XX_BUTTRESS_INTERRUPT_STAT, status);
/* Re-enable global interrupt */
REGB_WR32(VPU_40XX_BUTTRESS_GLOBAL_INT_MASK, 0x0);
if (schedule_recovery)
ivpu_pm_schedule_recovery(vdev);
@ -1110,9 +1117,14 @@ static irqreturn_t ivpu_hw_40xx_irq_handler(int irq, void *ptr)
struct ivpu_device *vdev = ptr;
irqreturn_t ret = IRQ_NONE;
REGB_WR32(VPU_40XX_BUTTRESS_GLOBAL_INT_MASK, 0x1);
ret |= ivpu_hw_40xx_irqv_handler(vdev, irq);
ret |= ivpu_hw_40xx_irqb_handler(vdev, irq);
/* Re-enable global interrupts to re-trigger MSI for pending interrupts */
REGB_WR32(VPU_40XX_BUTTRESS_GLOBAL_INT_MASK, 0x0);
if (ret & IRQ_WAKE_THREAD)
return IRQ_WAKE_THREAD;


@ -70,6 +70,8 @@
#define VPU_40XX_BUTTRESS_VPU_STATUS_READY_MASK BIT_MASK(0)
#define VPU_40XX_BUTTRESS_VPU_STATUS_IDLE_MASK BIT_MASK(1)
#define VPU_40XX_BUTTRESS_VPU_STATUS_DUP_IDLE_MASK BIT_MASK(2)
#define VPU_40XX_BUTTRESS_VPU_STATUS_CLOCK_RESOURCE_OWN_ACK_MASK BIT_MASK(6)
#define VPU_40XX_BUTTRESS_VPU_STATUS_POWER_RESOURCE_OWN_ACK_MASK BIT_MASK(7)
#define VPU_40XX_BUTTRESS_VPU_STATUS_PERF_CLK_MASK BIT_MASK(11)
#define VPU_40XX_BUTTRESS_VPU_STATUS_DISABLE_CLK_RELINQUISH_MASK BIT_MASK(12)


@ -209,10 +209,10 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
struct ivpu_ipc_rx_msg *rx_msg;
int wait_ret, ret = 0;
wait_ret = wait_event_interruptible_timeout(cons->rx_msg_wq,
(IS_KTHREAD() && kthread_should_stop()) ||
!list_empty(&cons->rx_msg_list),
msecs_to_jiffies(timeout_ms));
wait_ret = wait_event_timeout(cons->rx_msg_wq,
(IS_KTHREAD() && kthread_should_stop()) ||
!list_empty(&cons->rx_msg_list),
msecs_to_jiffies(timeout_ms));
if (IS_KTHREAD() && kthread_should_stop())
return -EINTR;
@ -220,9 +220,6 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
if (wait_ret == 0)
return -ETIMEDOUT;
if (wait_ret < 0)
return -ERESTARTSYS;
spin_lock_irq(&cons->rx_msg_lock);
rx_msg = list_first_entry_or_null(&cons->rx_msg_list, struct ivpu_ipc_rx_msg, link);
if (!rx_msg) {


@ -2057,7 +2057,9 @@ static int acpi_video_bus_add(struct acpi_device *device)
!auto_detect)
acpi_video_bus_register_backlight(video);
acpi_video_bus_add_notify_handler(video);
error = acpi_video_bus_add_notify_handler(video);
if (error)
goto err_del;
error = acpi_dev_install_notify_handler(device, ACPI_DEVICE_NOTIFY,
acpi_video_bus_notify);
@ -2067,10 +2069,11 @@ static int acpi_video_bus_add(struct acpi_device *device)
return 0;
err_remove:
acpi_video_bus_remove_notify_handler(video);
err_del:
mutex_lock(&video_list_lock);
list_del(&video->entry);
mutex_unlock(&video_list_lock);
acpi_video_bus_remove_notify_handler(video);
acpi_video_bus_unregister_backlight(video);
err_put_video:
acpi_video_bus_put_devices(video);


@ -1972,6 +1972,96 @@ int ata_dev_read_id(struct ata_device *dev, unsigned int *p_class,
return rc;
}
/**
* ata_dev_power_set_standby - Set a device power mode to standby
* @dev: target device
*
* Issue a STANDBY IMMEDIATE command to set a device power mode to standby.
* For an HDD device, this spins down the disks.
*
* LOCKING:
* Kernel thread context (may sleep).
*/
void ata_dev_power_set_standby(struct ata_device *dev)
{
unsigned long ap_flags = dev->link->ap->flags;
struct ata_taskfile tf;
unsigned int err_mask;
/* Issue STANDBY IMMEDIATE command only if supported by the device */
if (dev->class != ATA_DEV_ATA && dev->class != ATA_DEV_ZAC)
return;
/*
* Some odd clown BIOSes issue spindown on power off (ACPI S4 or S5)
* causing some drives to spin up and down again. For these, do nothing
* if we are being called on shutdown.
*/
if ((ap_flags & ATA_FLAG_NO_POWEROFF_SPINDOWN) &&
system_state == SYSTEM_POWER_OFF)
return;
if ((ap_flags & ATA_FLAG_NO_HIBERNATE_SPINDOWN) &&
system_entering_hibernation())
return;
ata_tf_init(dev, &tf);
tf.flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
tf.protocol = ATA_PROT_NODATA;
tf.command = ATA_CMD_STANDBYNOW1;
ata_dev_notice(dev, "Entering standby power mode\n");
err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
if (err_mask)
ata_dev_err(dev, "STANDBY IMMEDIATE failed (err_mask=0x%x)\n",
err_mask);
}
/**
* ata_dev_power_set_active - Set a device power mode to active
* @dev: target device
*
* Issue a VERIFY command to ensure that the device is in the
* active power mode. For a spun-down HDD (standby or idle power mode),
* the VERIFY command will complete after the disk spins up.
*
* LOCKING:
* Kernel thread context (may sleep).
*/
void ata_dev_power_set_active(struct ata_device *dev)
{
struct ata_taskfile tf;
unsigned int err_mask;
/*
* Issue READ VERIFY SECTORS command for 1 sector at lba=0 only
* if supported by the device.
*/
if (dev->class != ATA_DEV_ATA && dev->class != ATA_DEV_ZAC)
return;
ata_tf_init(dev, &tf);
tf.flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
tf.protocol = ATA_PROT_NODATA;
tf.command = ATA_CMD_VERIFY;
tf.nsect = 1;
if (dev->flags & ATA_DFLAG_LBA) {
tf.flags |= ATA_TFLAG_LBA;
tf.device |= ATA_LBA;
} else {
/* CHS */
tf.lbal = 0x1; /* sect */
}
ata_dev_notice(dev, "Entering active power mode\n");
err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
if (err_mask)
ata_dev_err(dev, "VERIFY failed (err_mask=0x%x)\n",
err_mask);
}
/**
* ata_read_log_page - read a specific log page
* @dev: target device
@ -2529,7 +2619,7 @@ static int ata_dev_config_lba(struct ata_device *dev)
{
const u16 *id = dev->id;
const char *lba_desc;
char ncq_desc[24];
char ncq_desc[32];
int ret;
dev->flags |= ATA_DFLAG_LBA;
@ -5037,17 +5127,19 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
struct ata_link *link;
unsigned long flags;
/* Previous resume operation might still be in
* progress. Wait for PM_PENDING to clear.
*/
if (ap->pflags & ATA_PFLAG_PM_PENDING) {
ata_port_wait_eh(ap);
WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
}
/* request PM ops to EH */
spin_lock_irqsave(ap->lock, flags);
/*
* A previous PM operation might still be in progress. Wait for
* ATA_PFLAG_PM_PENDING to clear.
*/
if (ap->pflags & ATA_PFLAG_PM_PENDING) {
spin_unlock_irqrestore(ap->lock, flags);
ata_port_wait_eh(ap);
spin_lock_irqsave(ap->lock, flags);
}
/* Request PM operation to EH */
ap->pm_mesg = mesg;
ap->pflags |= ATA_PFLAG_PM_PENDING;
ata_for_each_link(link, ap, HOST_FIRST) {
@ -5059,10 +5151,8 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg,
spin_unlock_irqrestore(ap->lock, flags);
if (!async) {
if (!async)
ata_port_wait_eh(ap);
WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING);
}
}
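The reordering above is the classic check-under-lock pattern: sample ATA_PFLAG_PM_PENDING with ap->lock held, drop the lock to sleep in ata_port_wait_eh(), and retake it before touching ap->pflags. A generalized sketch of the pattern (shown with a while loop; the code above can use a plain if, presumably because only this path arms PM_PENDING):

spin_lock_irqsave(ap->lock, flags);
while (ap->pflags & ATA_PFLAG_PM_PENDING) {
spin_unlock_irqrestore(ap->lock, flags);
ata_port_wait_eh(ap); /* may sleep */
spin_lock_irqsave(ap->lock, flags);
}
/* ... update ap->pm_mesg and ap->pflags under the lock ... */
spin_unlock_irqrestore(ap->lock, flags);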
/*
@ -5078,11 +5168,27 @@ static const unsigned int ata_port_suspend_ehi = ATA_EHI_QUIET
static void ata_port_suspend(struct ata_port *ap, pm_message_t mesg)
{
/*
* We are about to suspend the port, so we do not care about
* scsi_rescan_device() calls scheduled by previous resume operations.
* The next resume will schedule the rescan again. So cancel any rescan
* that is not done yet.
*/
cancel_delayed_work_sync(&ap->scsi_rescan_task);
ata_port_request_pm(ap, mesg, 0, ata_port_suspend_ehi, false);
}
static void ata_port_suspend_async(struct ata_port *ap, pm_message_t mesg)
{
/*
* We are about to suspend the port, so we do not care about
* scsi_rescan_device() calls scheduled by previous resume operations.
* The next resume will schedule the rescan again. So cancel any rescan
* that is not done yet.
*/
cancel_delayed_work_sync(&ap->scsi_rescan_task);
ata_port_request_pm(ap, mesg, 0, ata_port_suspend_ehi, true);
}
@ -5229,7 +5335,7 @@ EXPORT_SYMBOL_GPL(ata_host_resume);
#endif
const struct device_type ata_port_type = {
.name = "ata_port",
.name = ATA_PORT_TYPE_NAME,
#ifdef CONFIG_PM
.pm = &ata_port_pm_ops,
#endif
@ -5948,11 +6054,30 @@ static void ata_port_detach(struct ata_port *ap)
struct ata_link *link;
struct ata_device *dev;
/* tell EH we're leaving & flush EH */
/* Wait for any ongoing EH */
ata_port_wait_eh(ap);
mutex_lock(&ap->scsi_scan_mutex);
spin_lock_irqsave(ap->lock, flags);
/* Remove scsi devices */
ata_for_each_link(link, ap, HOST_FIRST) {
ata_for_each_dev(dev, link, ALL) {
if (dev->sdev) {
spin_unlock_irqrestore(ap->lock, flags);
scsi_remove_device(dev->sdev);
spin_lock_irqsave(ap->lock, flags);
dev->sdev = NULL;
}
}
}
/* Tell EH to disable all devices */
ap->pflags |= ATA_PFLAG_UNLOADING;
ata_port_schedule_eh(ap);
spin_unlock_irqrestore(ap->lock, flags);
mutex_unlock(&ap->scsi_scan_mutex);
/* wait till EH commits suicide */
ata_port_wait_eh(ap);

View File

@ -147,6 +147,8 @@ ata_eh_cmd_timeout_table[ATA_EH_CMD_TIMEOUT_TABLE_SIZE] = {
.timeouts = ata_eh_other_timeouts, },
{ .commands = CMDS(ATA_CMD_FLUSH, ATA_CMD_FLUSH_EXT),
.timeouts = ata_eh_flush_timeouts },
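/*
* VERIFY is issued by ata_dev_power_set_active() to wake up drives and
* may have to wait for spin-up, so reuse the generous reset timeouts.
*/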
{ .commands = CMDS(ATA_CMD_VERIFY),
.timeouts = ata_eh_reset_timeouts },
};
#undef CMDS
@ -498,7 +500,19 @@ static void ata_eh_unload(struct ata_port *ap)
struct ata_device *dev;
unsigned long flags;
/* Restore SControl IPM and SPD for the next driver and
/*
* Unless we are restarting, transition all enabled devices to
* standby power mode.
*/
if (system_state != SYSTEM_RESTART) {
ata_for_each_link(link, ap, PMP_FIRST) {
ata_for_each_dev(dev, link, ENABLED)
ata_dev_power_set_standby(dev);
}
}
/*
* Restore SControl IPM and SPD for the next driver and
* disable attached devices.
*/
ata_for_each_link(link, ap, PMP_FIRST) {
@ -684,6 +698,10 @@ void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap)
ehc->saved_xfer_mode[devno] = dev->xfer_mode;
if (ata_ncq_enabled(dev))
ehc->saved_ncq_enabled |= 1 << devno;
/* If we are resuming, wake up the device */
if (ap->pflags & ATA_PFLAG_RESUMING)
ehc->i.dev_action[devno] |= ATA_EH_SET_ACTIVE;
}
}
@ -743,6 +761,8 @@ void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap)
/* clean up */
spin_lock_irqsave(ap->lock, flags);
ap->pflags &= ~ATA_PFLAG_RESUMING;
if (ap->pflags & ATA_PFLAG_LOADING)
ap->pflags &= ~ATA_PFLAG_LOADING;
else if ((ap->pflags & ATA_PFLAG_SCSI_HOTPLUG) &&
@ -1218,6 +1238,13 @@ void ata_eh_detach_dev(struct ata_device *dev)
struct ata_eh_context *ehc = &link->eh_context;
unsigned long flags;
/*
* If the device is still enabled, transition it to standby power mode
* (i.e. spin down HDDs).
*/
if (ata_dev_enabled(dev))
ata_dev_power_set_standby(dev);
ata_dev_disable(dev);
spin_lock_irqsave(ap->lock, flags);
@ -2305,7 +2332,7 @@ static void ata_eh_link_report(struct ata_link *link)
struct ata_eh_context *ehc = &link->eh_context;
struct ata_queued_cmd *qc;
const char *frozen, *desc;
char tries_buf[6] = "";
char tries_buf[16] = "";
int tag, nr_failed = 0;
if (ehc->i.flags & ATA_EHI_QUIET)
@ -3016,6 +3043,15 @@ static int ata_eh_revalidate_and_attach(struct ata_link *link,
if (ehc->i.flags & ATA_EHI_DID_RESET)
readid_flags |= ATA_READID_POSTRESET;
/*
* When resuming, before executing any command, make sure to
* transition the device to the active power mode.
*/
if ((action & ATA_EH_SET_ACTIVE) && ata_dev_enabled(dev)) {
ata_dev_power_set_active(dev);
ata_eh_done(link, dev, ATA_EH_SET_ACTIVE);
}
if ((action & ATA_EH_REVALIDATE) && ata_dev_enabled(dev)) {
WARN_ON(dev->class == ATA_DEV_PMP);
@ -3989,6 +4025,7 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
unsigned long flags;
int rc = 0;
struct ata_device *dev;
struct ata_link *link;
/* are we suspending? */
spin_lock_irqsave(ap->lock, flags);
@ -4001,6 +4038,12 @@ static void ata_eh_handle_port_suspend(struct ata_port *ap)
WARN_ON(ap->pflags & ATA_PFLAG_SUSPENDED);
/* Set all devices attached to the port in standby mode */
ata_for_each_link(link, ap, HOST_FIRST) {
ata_for_each_dev(dev, link, ENABLED)
ata_dev_power_set_standby(dev);
}
/*
* If we have a ZPODD attached, check its zero
* power ready status before the port is frozen.
@ -4083,6 +4126,7 @@ static void ata_eh_handle_port_resume(struct ata_port *ap)
/* update the flags */
spin_lock_irqsave(ap->lock, flags);
ap->pflags &= ~(ATA_PFLAG_PM_PENDING | ATA_PFLAG_SUSPENDED);
ap->pflags |= ATA_PFLAG_RESUMING;
spin_unlock_irqrestore(ap->lock, flags);
}
#endif /* CONFIG_PM */

View File

@ -1050,14 +1050,13 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
}
} else {
sdev->sector_size = ata_id_logical_sector_size(dev->id);
/*
* Stop the drive on suspend but do not issue START STOP UNIT
* on resume as this is not necessary and may fail: the device
* will be woken up by ata_port_pm_resume() with a port reset
* and device revalidation.
* Ask the sd driver to issue START STOP UNIT on runtime suspend
* and resume only. For system-level suspend/resume, the device
* power state is handled directly by libata EH.
*/
sdev->manage_start_stop = 1;
sdev->no_start_on_resume = 1;
sdev->manage_runtime_start_stop = true;
}
/*
@ -1089,6 +1088,42 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
return 0;
}
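The runtime/system split can be illustrated with a simplified, hypothetical sd-side check (the example_sd_* names are illustrative, not the actual sd.c code; only the manage_runtime_start_stop flag comes from this patch):

/* Hypothetical helper, standing in for sd's real START STOP UNIT logic. */
static int example_sd_send_stop_unit(struct scsi_device *sdev);

static int example_sd_suspend(struct scsi_device *sdev, bool runtime)
{
/*
* Spin the drive down from sd only on the runtime PM path; on the
* system PM path, libata EH puts the device in standby itself.
*/
if (runtime && sdev->manage_runtime_start_stop)
return example_sd_send_stop_unit(sdev);
return 0;
}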
/**
* ata_scsi_slave_alloc - Early setup of SCSI device
* @sdev: SCSI device to examine
*
* This is called from scsi_alloc_sdev() when the scsi device
* associated with an ATA device is scanned on a port.
*
* LOCKING:
* Defined by SCSI layer. We don't really care.
*/
int ata_scsi_slave_alloc(struct scsi_device *sdev)
{
struct ata_port *ap = ata_shost_to_port(sdev->host);
struct device_link *link;
ata_scsi_sdev_config(sdev);
/*
* Create a link from the ata_port device to the scsi device to ensure
* that PM does suspend/resume in the correct order: the scsi device is
* the consumer (child) and the ata port the supplier (parent).
*/
link = device_link_add(&sdev->sdev_gendev, &ap->tdev,
DL_FLAG_STATELESS |
DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE);
if (!link) {
ata_port_err(ap, "Failed to create link to scsi device %s\n",
dev_name(&sdev->sdev_gendev));
return -ENODEV;
}
return 0;
}
EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
/**
* ata_scsi_slave_config - Set SCSI device attributes
* @sdev: SCSI device to examine
@ -1105,14 +1140,11 @@ int ata_scsi_slave_config(struct scsi_device *sdev)
{
struct ata_port *ap = ata_shost_to_port(sdev->host);
struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);
int rc = 0;
ata_scsi_sdev_config(sdev);
if (dev)
rc = ata_scsi_dev_config(sdev, dev);
return ata_scsi_dev_config(sdev, dev);
return rc;
return 0;
}
EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
@ -1136,6 +1168,8 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev)
unsigned long flags;
struct ata_device *dev;
device_link_remove(&sdev->sdev_gendev, &ap->tdev);
spin_lock_irqsave(ap->lock, flags);
dev = __ata_scsi_find_dev(ap, sdev);
if (dev && dev->sdev) {
@ -1195,7 +1229,7 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc)
}
if (cdb[4] & 0x1) {
tf->nsect = 1; /* 1 sector, lba=0 */
tf->nsect = 1; /* 1 sector, lba=0 */
if (qc->dev->flags & ATA_DFLAG_LBA) {
tf->flags |= ATA_TFLAG_LBA;
@ -1211,7 +1245,7 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc)
tf->lbah = 0x0; /* cyl high */
}
tf->command = ATA_CMD_VERIFY; /* READ VERIFY */
tf->command = ATA_CMD_VERIFY; /* READ VERIFY */
} else {
/* Some odd clown BIOSen issue spindown on power off (ACPI S4
* or S5) causing some drives to spin up and down again.
@ -1221,7 +1255,7 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc)
goto skip;
if ((qc->ap->flags & ATA_FLAG_NO_HIBERNATE_SPINDOWN) &&
system_entering_hibernation())
system_entering_hibernation())
goto skip;
/* Issue ATA STANDBY IMMEDIATE command */
@ -1835,6 +1869,9 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
hdr[2] = 0x7; /* claim SPC-5 version compatibility */
}
if (args->dev->flags & ATA_DFLAG_CDL)
hdr[2] = 0xd; /* claim SPC-6 version compatibility */
memcpy(rbuf, hdr, sizeof(hdr));
memcpy(&rbuf[8], "ATA ", 8);
ata_id_string(args->id, &rbuf[16], ATA_ID_PROD, 16);
@ -4312,7 +4349,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
break;
case MAINTENANCE_IN:
if (scsicmd[1] == MI_REPORT_SUPPORTED_OPERATION_CODES)
if ((scsicmd[1] & 0x1f) == MI_REPORT_SUPPORTED_OPERATION_CODES)
ata_scsi_rbuf_fill(&args, ata_scsiop_maint_in);
else
ata_scsi_set_invalid_field(dev, cmd, 1, 0xff);
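For reference, SPC encodes the MAINTENANCE IN service action in bits 4:0 of CDB byte 1, so masking with 0x1f extracts the service action instead of rejecting CDBs that have the reserved upper bits set.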
@ -4722,7 +4759,7 @@ void ata_scsi_dev_rescan(struct work_struct *work)
struct ata_link *link;
struct ata_device *dev;
unsigned long flags;
bool delay_rescan = false;
int ret = 0;
mutex_lock(&ap->scsi_scan_mutex);
spin_lock_irqsave(ap->lock, flags);
@ -4731,37 +4768,34 @@ void ata_scsi_dev_rescan(struct work_struct *work)
ata_for_each_dev(dev, link, ENABLED) {
struct scsi_device *sdev = dev->sdev;
/*
* If the port was suspended before this was scheduled,
* bail out.
*/
if (ap->pflags & ATA_PFLAG_SUSPENDED)
goto unlock;
if (!sdev)
continue;
if (scsi_device_get(sdev))
continue;
/*
* If the rescan work was scheduled because of a resume
* event, the port is already fully resumed, but the
* SCSI device may not yet be fully resumed. In such
* case, executing scsi_rescan_device() may cause a
* deadlock with the PM code on device_lock(). Prevent
* this by giving up and retrying rescan after a short
* delay.
*/
delay_rescan = sdev->sdev_gendev.power.is_suspended;
if (delay_rescan) {
scsi_device_put(sdev);
break;
}
spin_unlock_irqrestore(ap->lock, flags);
scsi_rescan_device(sdev);
ret = scsi_rescan_device(sdev);
scsi_device_put(sdev);
spin_lock_irqsave(ap->lock, flags);
if (ret)
goto unlock;
}
}
unlock:
spin_unlock_irqrestore(ap->lock, flags);
mutex_unlock(&ap->scsi_scan_mutex);
if (delay_rescan)
/* Reschedule with a delay if scsi_rescan_device() returned an error */
if (ret)
schedule_delayed_work(&ap->scsi_rescan_task,
msecs_to_jiffies(5));
}

View File

@ -266,6 +266,10 @@ void ata_tport_delete(struct ata_port *ap)
put_device(dev);
}
static const struct device_type ata_port_sas_type = {
.name = ATA_PORT_TYPE_NAME,
};
/** ata_tport_add - initialize a transport ATA port structure
*
* @parent: parent device
@ -283,7 +287,10 @@ int ata_tport_add(struct device *parent,
struct device *dev = &ap->tdev;
device_initialize(dev);
dev->type = &ata_port_type;
if (ap->flags & ATA_FLAG_SAS_HOST)
dev->type = &ata_port_sas_type;
else
dev->type = &ata_port_type;
dev->parent = parent;
ata_host_get(ap->host);

View File

@ -30,6 +30,8 @@ enum {
ATA_DNXFER_QUIET = (1 << 31),
};
#define ATA_PORT_TYPE_NAME "ata_port"
extern atomic_t ata_print_id;
extern int atapi_passthru16;
extern int libata_fua;
@ -60,6 +62,8 @@ extern int ata_dev_reread_id(struct ata_device *dev, unsigned int readid_flags);
extern int ata_dev_revalidate(struct ata_device *dev, unsigned int new_class,
unsigned int readid_flags);
extern int ata_dev_configure(struct ata_device *dev);
extern void ata_dev_power_set_standby(struct ata_device *dev);
extern void ata_dev_power_set_active(struct ata_device *dev);
extern int sata_down_spd_limit(struct ata_link *link, u32 spd_limit);
extern int ata_down_xfermask_limit(struct ata_device *dev, unsigned int sel);
extern unsigned int ata_dev_set_feature(struct ata_device *dev,

View File

@ -632,9 +632,8 @@ void rbd_warn(struct rbd_device *rbd_dev, const char *fmt, ...)
static void rbd_dev_remove_parent(struct rbd_device *rbd_dev);
static int rbd_dev_refresh(struct rbd_device *rbd_dev);
static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev);
static int rbd_dev_header_info(struct rbd_device *rbd_dev);
static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev);
static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
struct rbd_image_header *header);
static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
u64 snap_id);
static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
@ -995,15 +994,24 @@ static void rbd_init_layout(struct rbd_device *rbd_dev)
RCU_INIT_POINTER(rbd_dev->layout.pool_ns, NULL);
}
static void rbd_image_header_cleanup(struct rbd_image_header *header)
{
kfree(header->object_prefix);
ceph_put_snap_context(header->snapc);
kfree(header->snap_sizes);
kfree(header->snap_names);
memset(header, 0, sizeof(*header));
}
/*
* Fill an rbd image header with information from the given format 1
* on-disk header.
*/
static int rbd_header_from_disk(struct rbd_device *rbd_dev,
struct rbd_image_header_ondisk *ondisk)
static int rbd_header_from_disk(struct rbd_image_header *header,
struct rbd_image_header_ondisk *ondisk,
bool first_time)
{
struct rbd_image_header *header = &rbd_dev->header;
bool first_time = header->object_prefix == NULL;
struct ceph_snap_context *snapc;
char *object_prefix = NULL;
char *snap_names = NULL;
@ -1070,11 +1078,6 @@ static int rbd_header_from_disk(struct rbd_device *rbd_dev,
if (first_time) {
header->object_prefix = object_prefix;
header->obj_order = ondisk->options.order;
rbd_init_layout(rbd_dev);
} else {
ceph_put_snap_context(header->snapc);
kfree(header->snap_names);
kfree(header->snap_sizes);
}
/* The remaining fields always get updated (when we refresh) */
@ -4859,7 +4862,9 @@ static int rbd_obj_read_sync(struct rbd_device *rbd_dev,
* return, the provided rbd_image_header will contain up-to-date
* information about the image.
*/
static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev,
struct rbd_image_header *header,
bool first_time)
{
struct rbd_image_header_ondisk *ondisk = NULL;
u32 snap_count = 0;
@ -4907,7 +4912,7 @@ static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
snap_count = le32_to_cpu(ondisk->snap_count);
} while (snap_count != want_count);
ret = rbd_header_from_disk(rbd_dev, ondisk);
ret = rbd_header_from_disk(header, ondisk, first_time);
out:
kfree(ondisk);
@ -4931,39 +4936,6 @@ static void rbd_dev_update_size(struct rbd_device *rbd_dev)
}
}
static int rbd_dev_refresh(struct rbd_device *rbd_dev)
{
u64 mapping_size;
int ret;
down_write(&rbd_dev->header_rwsem);
mapping_size = rbd_dev->mapping.size;
ret = rbd_dev_header_info(rbd_dev);
if (ret)
goto out;
/*
* If there is a parent, see if it has disappeared due to the
* mapped image getting flattened.
*/
if (rbd_dev->parent) {
ret = rbd_dev_v2_parent_info(rbd_dev);
if (ret)
goto out;
}
rbd_assert(!rbd_is_snap(rbd_dev));
rbd_dev->mapping.size = rbd_dev->header.image_size;
out:
up_write(&rbd_dev->header_rwsem);
if (!ret && mapping_size != rbd_dev->mapping.size)
rbd_dev_update_size(rbd_dev);
return ret;
}
static const struct blk_mq_ops rbd_mq_ops = {
.queue_rq = rbd_queue_rq,
};
@ -5503,17 +5475,12 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
return 0;
}
static int rbd_dev_v2_image_size(struct rbd_device *rbd_dev)
{
return _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
&rbd_dev->header.obj_order,
&rbd_dev->header.image_size);
}
static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev,
char **pobject_prefix)
{
size_t size;
void *reply_buf;
char *object_prefix;
int ret;
void *p;
@ -5531,16 +5498,16 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
goto out;
p = reply_buf;
rbd_dev->header.object_prefix = ceph_extract_encoded_string(&p,
p + ret, NULL, GFP_NOIO);
object_prefix = ceph_extract_encoded_string(&p, p + ret, NULL,
GFP_NOIO);
if (IS_ERR(object_prefix)) {
ret = PTR_ERR(object_prefix);
goto out;
}
ret = 0;
if (IS_ERR(rbd_dev->header.object_prefix)) {
ret = PTR_ERR(rbd_dev->header.object_prefix);
rbd_dev->header.object_prefix = NULL;
} else {
dout(" object_prefix = %s\n", rbd_dev->header.object_prefix);
}
*pobject_prefix = object_prefix;
dout(" object_prefix = %s\n", object_prefix);
out:
kfree(reply_buf);
@ -5591,13 +5558,6 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
return 0;
}
static int rbd_dev_v2_features(struct rbd_device *rbd_dev)
{
return _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
rbd_is_ro(rbd_dev),
&rbd_dev->header.features);
}
/*
* These are generic image flags, but since they are used only for
* object map, store them in rbd_dev->object_map_flags.
@ -5634,6 +5594,14 @@ struct parent_image_info {
u64 overlap;
};
static void rbd_parent_info_cleanup(struct parent_image_info *pii)
{
kfree(pii->pool_ns);
kfree(pii->image_id);
memset(pii, 0, sizeof(*pii));
}
/*
* The caller is responsible for @pii.
*/
@ -5703,6 +5671,9 @@ static int __get_parent_info(struct rbd_device *rbd_dev,
if (pii->has_overlap)
ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
__func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
pii->has_overlap, pii->overlap);
return 0;
e_inval:
@ -5741,14 +5712,17 @@ static int __get_parent_info_legacy(struct rbd_device *rbd_dev,
pii->has_overlap = true;
ceph_decode_64_safe(&p, end, pii->overlap, e_inval);
dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
__func__, pii->pool_id, pii->pool_ns, pii->image_id, pii->snap_id,
pii->has_overlap, pii->overlap);
return 0;
e_inval:
return -EINVAL;
}
static int get_parent_info(struct rbd_device *rbd_dev,
struct parent_image_info *pii)
static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev,
struct parent_image_info *pii)
{
struct page *req_page, *reply_page;
void *p;
@ -5776,7 +5750,7 @@ static int get_parent_info(struct rbd_device *rbd_dev,
return ret;
}
static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
static int rbd_dev_setup_parent(struct rbd_device *rbd_dev)
{
struct rbd_spec *parent_spec;
struct parent_image_info pii = { 0 };
@ -5786,37 +5760,12 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
if (!parent_spec)
return -ENOMEM;
ret = get_parent_info(rbd_dev, &pii);
ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
if (ret)
goto out_err;
dout("%s pool_id %llu pool_ns %s image_id %s snap_id %llu has_overlap %d overlap %llu\n",
__func__, pii.pool_id, pii.pool_ns, pii.image_id, pii.snap_id,
pii.has_overlap, pii.overlap);
if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap) {
/*
* Either the parent never existed, or we have
* record of it but the image got flattened so it no
* longer has a parent. When the parent of a
* layered image disappears we immediately set the
* overlap to 0. The effect of this is that all new
* requests will be treated as if the image had no
* parent.
*
* If !pii.has_overlap, the parent image spec is not
* applicable. It's there to avoid duplication in each
* snapshot record.
*/
if (rbd_dev->parent_overlap) {
rbd_dev->parent_overlap = 0;
rbd_dev_parent_put(rbd_dev);
pr_info("%s: clone image has been flattened\n",
rbd_dev->disk->disk_name);
}
if (pii.pool_id == CEPH_NOPOOL || !pii.has_overlap)
goto out; /* No parent? No problem. */
}
/* The ceph file layout needs to fit pool id in 32 bits */
@ -5828,58 +5777,46 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
}
/*
* The parent won't change (except when the clone is
* flattened, already handled that). So we only need to
* record the parent spec we have not already done so.
* The parent won't change except when the clone is flattened,
* so we only need to record the parent image spec once.
*/
if (!rbd_dev->parent_spec) {
parent_spec->pool_id = pii.pool_id;
if (pii.pool_ns && *pii.pool_ns) {
parent_spec->pool_ns = pii.pool_ns;
pii.pool_ns = NULL;
}
parent_spec->image_id = pii.image_id;
pii.image_id = NULL;
parent_spec->snap_id = pii.snap_id;
rbd_dev->parent_spec = parent_spec;
parent_spec = NULL; /* rbd_dev now owns this */
parent_spec->pool_id = pii.pool_id;
if (pii.pool_ns && *pii.pool_ns) {
parent_spec->pool_ns = pii.pool_ns;
pii.pool_ns = NULL;
}
parent_spec->image_id = pii.image_id;
pii.image_id = NULL;
parent_spec->snap_id = pii.snap_id;
rbd_assert(!rbd_dev->parent_spec);
rbd_dev->parent_spec = parent_spec;
parent_spec = NULL; /* rbd_dev now owns this */
/*
* We always update the parent overlap. If it's zero we issue
* a warning, as we will proceed as if there was no parent.
* Record the parent overlap. If it's zero, issue a warning as
* we will proceed as if there is no parent.
*/
if (!pii.overlap) {
if (parent_spec) {
/* refresh, careful to warn just once */
if (rbd_dev->parent_overlap)
rbd_warn(rbd_dev,
"clone now standalone (overlap became 0)");
} else {
/* initial probe */
rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
}
}
if (!pii.overlap)
rbd_warn(rbd_dev, "clone is standalone (overlap 0)");
rbd_dev->parent_overlap = pii.overlap;
out:
ret = 0;
out_err:
kfree(pii.pool_ns);
kfree(pii.image_id);
rbd_parent_info_cleanup(&pii);
rbd_spec_put(parent_spec);
return ret;
}
static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev,
u64 *stripe_unit, u64 *stripe_count)
{
struct {
__le64 stripe_unit;
__le64 stripe_count;
} __attribute__ ((packed)) striping_info_buf = { 0 };
size_t size = sizeof (striping_info_buf);
void *p;
int ret;
ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
@ -5891,27 +5828,33 @@ static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
if (ret < size)
return -ERANGE;
p = &striping_info_buf;
rbd_dev->header.stripe_unit = ceph_decode_64(&p);
rbd_dev->header.stripe_count = ceph_decode_64(&p);
*stripe_unit = le64_to_cpu(striping_info_buf.stripe_unit);
*stripe_count = le64_to_cpu(striping_info_buf.stripe_count);
dout(" stripe_unit = %llu stripe_count = %llu\n", *stripe_unit,
*stripe_count);
return 0;
}
static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev)
static int rbd_dev_v2_data_pool(struct rbd_device *rbd_dev, s64 *data_pool_id)
{
__le64 data_pool_id;
__le64 data_pool_buf;
int ret;
ret = rbd_obj_method_sync(rbd_dev, &rbd_dev->header_oid,
&rbd_dev->header_oloc, "get_data_pool",
NULL, 0, &data_pool_id, sizeof(data_pool_id));
NULL, 0, &data_pool_buf,
sizeof(data_pool_buf));
dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
if (ret < 0)
return ret;
if (ret < sizeof(data_pool_id))
if (ret < sizeof(data_pool_buf))
return -EBADMSG;
rbd_dev->header.data_pool_id = le64_to_cpu(data_pool_id);
WARN_ON(rbd_dev->header.data_pool_id == CEPH_NOPOOL);
*data_pool_id = le64_to_cpu(data_pool_buf);
dout(" data_pool_id = %lld\n", *data_pool_id);
WARN_ON(*data_pool_id == CEPH_NOPOOL);
return 0;
}
@ -6103,7 +6046,8 @@ static int rbd_spec_fill_names(struct rbd_device *rbd_dev)
return ret;
}
static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev,
struct ceph_snap_context **psnapc)
{
size_t size;
int ret;
@ -6164,9 +6108,7 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
for (i = 0; i < snap_count; i++)
snapc->snaps[i] = ceph_decode_64(&p);
ceph_put_snap_context(rbd_dev->header.snapc);
rbd_dev->header.snapc = snapc;
*psnapc = snapc;
dout(" snap context seq = %llu, snap_count = %u\n",
(unsigned long long)seq, (unsigned int)snap_count);
out:
@ -6215,38 +6157,42 @@ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
return snap_name;
}
static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev)
static int rbd_dev_v2_header_info(struct rbd_device *rbd_dev,
struct rbd_image_header *header,
bool first_time)
{
bool first_time = rbd_dev->header.object_prefix == NULL;
int ret;
ret = rbd_dev_v2_image_size(rbd_dev);
ret = _rbd_dev_v2_snap_size(rbd_dev, CEPH_NOSNAP,
first_time ? &header->obj_order : NULL,
&header->image_size);
if (ret)
return ret;
if (first_time) {
ret = rbd_dev_v2_header_onetime(rbd_dev);
ret = rbd_dev_v2_header_onetime(rbd_dev, header);
if (ret)
return ret;
}
ret = rbd_dev_v2_snap_context(rbd_dev);
if (ret && first_time) {
kfree(rbd_dev->header.object_prefix);
rbd_dev->header.object_prefix = NULL;
}
ret = rbd_dev_v2_snap_context(rbd_dev, &header->snapc);
if (ret)
return ret;
return ret;
return 0;
}
static int rbd_dev_header_info(struct rbd_device *rbd_dev)
static int rbd_dev_header_info(struct rbd_device *rbd_dev,
struct rbd_image_header *header,
bool first_time)
{
rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
rbd_assert(!header->object_prefix && !header->snapc);
if (rbd_dev->image_format == 1)
return rbd_dev_v1_header_info(rbd_dev);
return rbd_dev_v1_header_info(rbd_dev, header, first_time);
return rbd_dev_v2_header_info(rbd_dev);
return rbd_dev_v2_header_info(rbd_dev, header, first_time);
}
/*
@ -6734,60 +6680,49 @@ static int rbd_dev_image_id(struct rbd_device *rbd_dev)
*/
static void rbd_dev_unprobe(struct rbd_device *rbd_dev)
{
struct rbd_image_header *header;
rbd_dev_parent_put(rbd_dev);
rbd_object_map_free(rbd_dev);
rbd_dev_mapping_clear(rbd_dev);
/* Free dynamic fields from the header, then zero it out */
header = &rbd_dev->header;
ceph_put_snap_context(header->snapc);
kfree(header->snap_sizes);
kfree(header->snap_names);
kfree(header->object_prefix);
memset(header, 0, sizeof (*header));
rbd_image_header_cleanup(&rbd_dev->header);
}
static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev)
static int rbd_dev_v2_header_onetime(struct rbd_device *rbd_dev,
struct rbd_image_header *header)
{
int ret;
ret = rbd_dev_v2_object_prefix(rbd_dev);
ret = rbd_dev_v2_object_prefix(rbd_dev, &header->object_prefix);
if (ret)
goto out_err;
return ret;
/*
* Get and check the features for the image. Currently the
* features are assumed to never change.
*/
ret = rbd_dev_v2_features(rbd_dev);
ret = _rbd_dev_v2_snap_features(rbd_dev, CEPH_NOSNAP,
rbd_is_ro(rbd_dev), &header->features);
if (ret)
goto out_err;
return ret;
/* If the image supports fancy striping, get its parameters */
if (rbd_dev->header.features & RBD_FEATURE_STRIPINGV2) {
ret = rbd_dev_v2_striping_info(rbd_dev);
if (ret < 0)
goto out_err;
}
if (rbd_dev->header.features & RBD_FEATURE_DATA_POOL) {
ret = rbd_dev_v2_data_pool(rbd_dev);
if (header->features & RBD_FEATURE_STRIPINGV2) {
ret = rbd_dev_v2_striping_info(rbd_dev, &header->stripe_unit,
&header->stripe_count);
if (ret)
goto out_err;
return ret;
}
rbd_init_layout(rbd_dev);
return 0;
if (header->features & RBD_FEATURE_DATA_POOL) {
ret = rbd_dev_v2_data_pool(rbd_dev, &header->data_pool_id);
if (ret)
return ret;
}
out_err:
rbd_dev->header.features = 0;
kfree(rbd_dev->header.object_prefix);
rbd_dev->header.object_prefix = NULL;
return ret;
return 0;
}
/*
@ -6982,13 +6917,15 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
if (!depth)
down_write(&rbd_dev->header_rwsem);
ret = rbd_dev_header_info(rbd_dev);
ret = rbd_dev_header_info(rbd_dev, &rbd_dev->header, true);
if (ret) {
if (ret == -ENOENT && !need_watch)
rbd_print_dne(rbd_dev, false);
goto err_out_probe;
}
rbd_init_layout(rbd_dev);
/*
* If this image is the one being mapped, we have pool name and
* id, image name and id, and snap name - need to fill snap id.
@ -7017,7 +6954,7 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
}
if (rbd_dev->header.features & RBD_FEATURE_LAYERING) {
ret = rbd_dev_v2_parent_info(rbd_dev);
ret = rbd_dev_setup_parent(rbd_dev);
if (ret)
goto err_out_probe;
}
@ -7043,6 +6980,107 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
return ret;
}
static void rbd_dev_update_header(struct rbd_device *rbd_dev,
struct rbd_image_header *header)
{
rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
rbd_assert(rbd_dev->header.object_prefix); /* !first_time */
if (rbd_dev->header.image_size != header->image_size) {
rbd_dev->header.image_size = header->image_size;
if (!rbd_is_snap(rbd_dev)) {
rbd_dev->mapping.size = header->image_size;
rbd_dev_update_size(rbd_dev);
}
}
ceph_put_snap_context(rbd_dev->header.snapc);
rbd_dev->header.snapc = header->snapc;
header->snapc = NULL;
if (rbd_dev->image_format == 1) {
kfree(rbd_dev->header.snap_names);
rbd_dev->header.snap_names = header->snap_names;
header->snap_names = NULL;
kfree(rbd_dev->header.snap_sizes);
rbd_dev->header.snap_sizes = header->snap_sizes;
header->snap_sizes = NULL;
}
}
static void rbd_dev_update_parent(struct rbd_device *rbd_dev,
struct parent_image_info *pii)
{
if (pii->pool_id == CEPH_NOPOOL || !pii->has_overlap) {
/*
* Either the parent never existed, or we have a
* record of it but the image got flattened so it no
* longer has a parent. When the parent of a
* layered image disappears we immediately set the
* overlap to 0. The effect of this is that all new
* requests will be treated as if the image had no
* parent.
*
* If !pii.has_overlap, the parent image spec is not
* applicable. It's there to avoid duplication in each
* snapshot record.
*/
if (rbd_dev->parent_overlap) {
rbd_dev->parent_overlap = 0;
rbd_dev_parent_put(rbd_dev);
pr_info("%s: clone has been flattened\n",
rbd_dev->disk->disk_name);
}
} else {
rbd_assert(rbd_dev->parent_spec);
/*
* Update the parent overlap. If it became zero, issue
* a warning as we will proceed as if there is no parent.
*/
if (!pii->overlap && rbd_dev->parent_overlap)
rbd_warn(rbd_dev,
"clone has become standalone (overlap 0)");
rbd_dev->parent_overlap = pii->overlap;
}
}
static int rbd_dev_refresh(struct rbd_device *rbd_dev)
{
struct rbd_image_header header = { 0 };
struct parent_image_info pii = { 0 };
int ret;
dout("%s rbd_dev %p\n", __func__, rbd_dev);
ret = rbd_dev_header_info(rbd_dev, &header, false);
if (ret)
goto out;
/*
* If there is a parent, see if it has disappeared due to the
* mapped image getting flattened.
*/
if (rbd_dev->parent) {
ret = rbd_dev_v2_parent_info(rbd_dev, &pii);
if (ret)
goto out;
}
down_write(&rbd_dev->header_rwsem);
rbd_dev_update_header(rbd_dev, &header);
if (rbd_dev->parent)
rbd_dev_update_parent(rbd_dev, &pii);
up_write(&rbd_dev->header_rwsem);
out:
rbd_parent_info_cleanup(&pii);
rbd_image_header_cleanup(&header);
return ret;
}
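Note the shape of the rewritten rbd_dev_refresh(): the new header and parent info are fetched into locals before header_rwsem is taken, swapped in under the write lock, and whatever was not transferred is released by the two cleanup helpers, so the semaphore is no longer held across OSD round trips.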
static ssize_t do_rbd_add(const char *buf, size_t count)
{
struct rbd_device *rbd_dev = NULL;

View File

@ -38,6 +38,7 @@ enum sysc_soc {
SOC_2420,
SOC_2430,
SOC_3430,
SOC_AM35,
SOC_3630,
SOC_4430,
SOC_4460,
@ -1097,6 +1098,11 @@ static int sysc_enable_module(struct device *dev)
if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE |
SYSC_QUIRK_SWSUP_SIDLE_ACT)) {
best_mode = SYSC_IDLE_NO;
/* Clear WAKEUP */
if (regbits->enwkup_shift >= 0 &&
ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
reg &= ~BIT(regbits->enwkup_shift);
} else {
best_mode = fls(ddata->cfg.sidlemodes) - 1;
if (best_mode > SYSC_IDLE_MASK) {
@ -1224,6 +1230,13 @@ static int sysc_disable_module(struct device *dev)
}
}
if (ddata->cfg.quirks & SYSC_QUIRK_SWSUP_SIDLE_ACT) {
/* Set WAKEUP */
if (regbits->enwkup_shift >= 0 &&
ddata->cfg.sysc_val & BIT(regbits->enwkup_shift))
reg |= BIT(regbits->enwkup_shift);
}
reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift);
reg |= best_mode << regbits->sidle_shift;
if (regbits->autoidle_shift >= 0 &&
@ -1518,16 +1531,16 @@ struct sysc_revision_quirk {
static const struct sysc_revision_quirk sysc_revision_quirks[] = {
/* These drivers need to be fixed to not use pm_runtime_irq_safe() */
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
/* Uarts on omap4 and later */
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff,
SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE),
/* Quirks that need to be set based on the module address */
SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff,
@ -1862,7 +1875,7 @@ static void sysc_pre_reset_quirk_dss(struct sysc *ddata)
dev_warn(ddata->dev, "%s: timed out %08x !+ %08x\n",
__func__, val, irq_mask);
if (sysc_soc->soc == SOC_3430) {
if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35) {
/* Clear DSS_SDI_CONTROL */
sysc_write(ddata, 0x44, 0);
@ -2150,8 +2163,7 @@ static int sysc_reset(struct sysc *ddata)
}
if (ddata->cfg.srst_udelay)
usleep_range(ddata->cfg.srst_udelay,
ddata->cfg.srst_udelay * 2);
fsleep(ddata->cfg.srst_udelay);
if (ddata->post_reset_quirk)
ddata->post_reset_quirk(ddata);
@ -3025,6 +3037,7 @@ static void ti_sysc_idle(struct work_struct *work)
static const struct soc_device_attribute sysc_soc_match[] = {
SOC_FLAG("OMAP242*", SOC_2420),
SOC_FLAG("OMAP243*", SOC_2430),
SOC_FLAG("AM35*", SOC_AM35),
SOC_FLAG("OMAP3[45]*", SOC_3430),
SOC_FLAG("OMAP3[67]*", SOC_3630),
SOC_FLAG("OMAP443*", SOC_4430),
@ -3229,7 +3242,7 @@ static int sysc_check_active_timer(struct sysc *ddata)
* can be dropped if we stop supporting old beagleboard revisions
* A to B4 at some point.
*/
if (sysc_soc->soc == SOC_3430)
if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35)
error = -ENXIO;
else
error = -EBUSY;

View File

@ -96,7 +96,7 @@ static int si521xx_regmap_i2c_write(void *context, unsigned int reg,
unsigned int val)
{
struct i2c_client *i2c = context;
const u8 data[3] = { reg, 1, val };
const u8 data[2] = { reg, val };
const int count = ARRAY_SIZE(data);
int ret;
@ -146,7 +146,7 @@ static int si521xx_regmap_i2c_read(void *context, unsigned int reg,
static const struct regmap_config si521xx_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
.cache_type = REGCACHE_NONE,
.cache_type = REGCACHE_FLAT,
.max_register = SI521XX_REG_DA,
.rd_table = &si521xx_readable_table,
.wr_table = &si521xx_writeable_table,
@ -281,9 +281,10 @@ static int si521xx_probe(struct i2c_client *client)
{
const u16 chip_info = (u16)(uintptr_t)device_get_match_data(&client->dev);
const struct clk_parent_data clk_parent_data = { .index = 0 };
struct si521xx *si;
const u8 data[3] = { SI521XX_REG_BC, 1, 1 };
unsigned char name[6] = "DIFF0";
struct clk_init_data init = {};
struct si521xx *si;
int i, ret;
if (!chip_info)
@ -308,7 +309,7 @@ static int si521xx_probe(struct i2c_client *client)
"Failed to allocate register map\n");
/* Always read back 1 Byte via I2C */
ret = regmap_write(si->regmap, SI521XX_REG_BC, 1);
ret = i2c_master_send(client, data, ARRAY_SIZE(data));
if (ret < 0)
return ret;
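A subtlety in the probe change: the old write accessor framed every transfer as { reg, byte count, value }, which appears to be the wire format the chip expects until the byte-count register is programmed. With the accessor fixed to plain { reg, value } pairs, the one-time SI521XX_REG_BC setup keeps the three-byte framing by going through i2c_master_send() directly instead of the regmap (this reading is inferred from the diff, not from the Si521xx datasheet).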

View File

@ -118,21 +118,21 @@ enum vc3_div {
VC3_DIV5,
};
enum vc3_clk_mux {
VC3_DIFF2_MUX,
VC3_DIFF1_MUX,
VC3_SE3_MUX,
VC3_SE2_MUX,
VC3_SE1_MUX,
enum vc3_clk {
VC3_REF,
VC3_SE1,
VC3_SE2,
VC3_SE3,
VC3_DIFF1,
VC3_DIFF2,
};
enum vc3_clk {
VC3_DIFF2,
VC3_DIFF1,
VC3_SE3,
VC3_SE2,
VC3_SE1,
VC3_REF,
enum vc3_clk_mux {
VC3_SE1_MUX = VC3_SE1 - 1,
VC3_SE2_MUX = VC3_SE2 - 1,
VC3_SE3_MUX = VC3_SE3 - 1,
VC3_DIFF1_MUX = VC3_DIFF1 - 1,
VC3_DIFF2_MUX = VC3_DIFF2 - 1,
};
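The offset-by-one definitions keep the two enums in lockstep: mux entry i feeds output clock i + 1, since VC3_REF has no mux of its own and clk_out[i] in probe below is registered against clk_mux[i - 1]. A compile-time sketch of the invariant (illustrative only, not part of the driver):

/* VC3_REF == 0 has no mux; every muxed output sits one slot later. */
static_assert(VC3_SE1_MUX == 0);
static_assert(VC3_DIFF2_MUX + 1 == VC3_DIFF2);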
struct vc3_clk_data {
@ -401,11 +401,10 @@ static long vc3_pll_round_rate(struct clk_hw *hw, unsigned long rate,
/* Determine best fractional part, which is 16 bit wide */
div_frc = rate % *parent_rate;
div_frc *= BIT(16) - 1;
do_div(div_frc, *parent_rate);
vc3->div_frc = (u32)div_frc;
vc3->div_frc = min_t(u64, div64_ul(div_frc, *parent_rate), U16_MAX);
rate = (*parent_rate *
(vc3->div_int * VC3_2_POW_16 + div_frc) / VC3_2_POW_16);
(vc3->div_int * VC3_2_POW_16 + vc3->div_frc) / VC3_2_POW_16);
} else {
rate = *parent_rate * vc3->div_int;
}
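A worked example of the fractional path, assuming div_int was computed earlier as rate / *parent_rate: with *parent_rate = 25000000 and rate = 80000000, div_int = 3, the remainder is 5000000, and div_frc = 5000000 * 65535 / 25000000 = 13107, comfortably under the U16_MAX clamp; the rounded rate is then 25000000 * (3 * 65536 + 13107) / 65536 = 79999923 Hz. A standalone check of the arithmetic (userspace sketch, not driver code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
uint64_t parent = 25000000, rate = 80000000;
uint64_t div_int = rate / parent; /* 3 */
uint64_t div_frc = (rate % parent) * 65535 / parent; /* 13107 */

printf("%llu\n", (unsigned long long)
(parent * (div_int * 65536 + div_frc) / 65536)); /* 79999923 */
return 0;
}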
@ -897,48 +896,16 @@ static struct vc3_hw_data clk_div[] = {
};
static struct vc3_hw_data clk_mux[] = {
[VC3_DIFF2_MUX] = {
[VC3_SE1_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_DIFF2_CTRL_REG,
.bitmsk = VC3_DIFF2_CTRL_REG_DIFF2_CLK_SEL
.offs = VC3_SE1_DIV4_CTRL,
.bitmsk = VC3_SE1_DIV4_CTRL_SE1_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "diff2_mux",
.name = "se1_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV1].hw,
&clk_div[VC3_DIV3].hw
},
.num_parents = 2,
.flags = CLK_SET_RATE_PARENT
}
},
[VC3_DIFF1_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_DIFF1_CTRL_REG,
.bitmsk = VC3_DIFF1_CTRL_REG_DIFF1_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "diff1_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV1].hw,
&clk_div[VC3_DIV3].hw
},
.num_parents = 2,
.flags = CLK_SET_RATE_PARENT
}
},
[VC3_SE3_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_SE3_DIFF1_CTRL_REG,
.bitmsk = VC3_SE3_DIFF1_CTRL_REG_SE3_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "se3_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV2].hw,
&clk_div[VC3_DIV5].hw,
&clk_div[VC3_DIV4].hw
},
.num_parents = 2,
@ -961,21 +928,53 @@ static struct vc3_hw_data clk_mux[] = {
.flags = CLK_SET_RATE_PARENT
}
},
[VC3_SE1_MUX] = {
[VC3_SE3_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_SE1_DIV4_CTRL,
.bitmsk = VC3_SE1_DIV4_CTRL_SE1_CLK_SEL
.offs = VC3_SE3_DIFF1_CTRL_REG,
.bitmsk = VC3_SE3_DIFF1_CTRL_REG_SE3_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "se1_mux",
.name = "se3_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV5].hw,
&clk_div[VC3_DIV2].hw,
&clk_div[VC3_DIV4].hw
},
.num_parents = 2,
.flags = CLK_SET_RATE_PARENT
}
},
[VC3_DIFF1_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_DIFF1_CTRL_REG,
.bitmsk = VC3_DIFF1_CTRL_REG_DIFF1_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "diff1_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV1].hw,
&clk_div[VC3_DIV3].hw
},
.num_parents = 2,
.flags = CLK_SET_RATE_PARENT
}
},
[VC3_DIFF2_MUX] = {
.data = &(struct vc3_clk_data) {
.offs = VC3_DIFF2_CTRL_REG,
.bitmsk = VC3_DIFF2_CTRL_REG_DIFF2_CLK_SEL
},
.hw.init = &(struct clk_init_data){
.name = "diff2_mux",
.ops = &vc3_clk_mux_ops,
.parent_hws = (const struct clk_hw *[]) {
&clk_div[VC3_DIV1].hw,
&clk_div[VC3_DIV3].hw
},
.num_parents = 2,
.flags = CLK_SET_RATE_PARENT
}
}
};
@ -1110,7 +1109,7 @@ static int vc3_probe(struct i2c_client *client)
name, 0, CLK_SET_RATE_PARENT, 1, 1);
else
clk_out[i] = devm_clk_hw_register_fixed_factor_parent_hw(dev,
name, &clk_mux[i].hw, CLK_SET_RATE_PARENT, 1, 1);
name, &clk_mux[i - 1].hw, CLK_SET_RATE_PARENT, 1, 1);
if (IS_ERR(clk_out[i]))
return PTR_ERR(clk_out[i]);

View File

@ -800,7 +800,7 @@ static SPRD_MUX_CLK_DATA(uart1_clk, "uart1-clk", uart_parents,
0x250, 0, 3, UMS512_MUX_FLAG);
static const struct clk_parent_data thm_parents[] = {
{ .fw_name = "ext-32m" },
{ .fw_name = "ext-32k" },
{ .hw = &clk_250k.hw },
};
static SPRD_MUX_CLK_DATA(thm0_clk, "thm0-clk", thm_parents,

View File

@ -159,7 +159,7 @@ static unsigned long tegra_bpmp_clk_recalc_rate(struct clk_hw *hw,
err = tegra_bpmp_clk_transfer(clk->bpmp, &msg);
if (err < 0)
return err;
return 0;
return response.rate;
}

View File

@ -73,6 +73,7 @@ struct cxl_rcrb_info;
resource_size_t __rcrb_to_component(struct device *dev,
struct cxl_rcrb_info *ri,
enum cxl_rcrb which);
u16 cxl_rcrb_to_aer(struct device *dev, resource_size_t rcrb);
extern struct rw_semaphore cxl_dpa_rwsem;
extern struct rw_semaphore cxl_region_rwsem;

View File

@ -81,26 +81,6 @@ static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
cxlhdm->interleave_mask |= GENMASK(14, 12);
}
static int map_hdm_decoder_regs(struct cxl_port *port, void __iomem *crb,
struct cxl_component_regs *regs)
{
struct cxl_register_map map = {
.dev = &port->dev,
.resource = port->component_reg_phys,
.base = crb,
.max_size = CXL_COMPONENT_REG_BLOCK_SIZE,
};
cxl_probe_component_regs(&port->dev, crb, &map.component_map);
if (!map.component_map.hdm_decoder.valid) {
dev_dbg(&port->dev, "HDM decoder registers not implemented\n");
/* unique error code to indicate no HDM decoder capability */
return -ENODEV;
}
return cxl_map_component_regs(&map, regs, BIT(CXL_CM_CAP_CAP_ID_HDM));
}
static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
{
struct cxl_hdm *cxlhdm;
@ -153,9 +133,9 @@ static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info)
{
struct cxl_register_map *reg_map = &port->reg_map;
struct device *dev = &port->dev;
struct cxl_hdm *cxlhdm;
void __iomem *crb;
int rc;
cxlhdm = devm_kzalloc(dev, sizeof(*cxlhdm), GFP_KERNEL);
@ -164,19 +144,29 @@ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
cxlhdm->port = port;
dev_set_drvdata(dev, cxlhdm);
crb = ioremap(port->component_reg_phys, CXL_COMPONENT_REG_BLOCK_SIZE);
if (!crb && info && info->mem_enabled) {
/* Memory devices can configure device HDM using DVSEC range regs. */
if (reg_map->resource == CXL_RESOURCE_NONE) {
if (!info || !info->mem_enabled) {
dev_err(dev, "No component registers mapped\n");
return ERR_PTR(-ENXIO);
}
cxlhdm->decoder_count = info->ranges;
return cxlhdm;
} else if (!crb) {
dev_err(dev, "No component registers mapped\n");
return ERR_PTR(-ENXIO);
}
rc = map_hdm_decoder_regs(port, crb, &cxlhdm->regs);
iounmap(crb);
if (rc)
if (!reg_map->component_map.hdm_decoder.valid) {
dev_dbg(&port->dev, "HDM decoder registers not implemented\n");
/* unique error code to indicate no HDM decoder capability */
return ERR_PTR(-ENODEV);
}
rc = cxl_map_component_regs(reg_map, &cxlhdm->regs,
BIT(CXL_CM_CAP_CAP_ID_HDM));
if (rc) {
dev_err(dev, "Failed to map HDM capability.\n");
return ERR_PTR(rc);
}
parse_hdm_decoder_caps(cxlhdm);
if (cxlhdm->decoder_count == 0) {

View File

@ -1402,6 +1402,8 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
mutex_init(&mds->mbox_mutex);
mutex_init(&mds->event.log_lock);
mds->cxlds.dev = dev;
mds->cxlds.reg_map.host = dev;
mds->cxlds.reg_map.resource = CXL_RESOURCE_NONE;
mds->cxlds.type = CXL_DEVTYPE_CLASSMEM;
return mds;

View File

@ -5,6 +5,7 @@
#include <linux/delay.h>
#include <linux/pci.h>
#include <linux/pci-doe.h>
#include <linux/aer.h>
#include <cxlpci.h>
#include <cxlmem.h>
#include <cxl.h>
@ -646,32 +647,36 @@ void read_cdat_data(struct cxl_port *port)
}
EXPORT_SYMBOL_NS_GPL(read_cdat_data, CXL);
void cxl_cor_error_detected(struct pci_dev *pdev)
static void __cxl_handle_cor_ras(struct cxl_dev_state *cxlds,
void __iomem *ras_base)
{
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
void __iomem *addr;
u32 status;
if (!cxlds->regs.ras)
if (!ras_base)
return;
addr = cxlds->regs.ras + CXL_RAS_CORRECTABLE_STATUS_OFFSET;
addr = ras_base + CXL_RAS_CORRECTABLE_STATUS_OFFSET;
status = readl(addr);
if (status & CXL_RAS_CORRECTABLE_STATUS_MASK) {
writel(status & CXL_RAS_CORRECTABLE_STATUS_MASK, addr);
trace_cxl_aer_correctable_error(cxlds->cxlmd, status);
}
}
EXPORT_SYMBOL_NS_GPL(cxl_cor_error_detected, CXL);
static void cxl_handle_endpoint_cor_ras(struct cxl_dev_state *cxlds)
{
return __cxl_handle_cor_ras(cxlds, cxlds->regs.ras);
}
/* CXL spec rev3.0 8.2.4.16.1 */
static void header_log_copy(struct cxl_dev_state *cxlds, u32 *log)
static void header_log_copy(void __iomem *ras_base, u32 *log)
{
void __iomem *addr;
u32 *log_addr;
int i, log_u32_size = CXL_HEADERLOG_SIZE / sizeof(u32);
addr = cxlds->regs.ras + CXL_RAS_HEADER_LOG_OFFSET;
addr = ras_base + CXL_RAS_HEADER_LOG_OFFSET;
log_addr = log;
for (i = 0; i < log_u32_size; i++) {
@ -685,17 +690,18 @@ static void header_log_copy(struct cxl_dev_state *cxlds, u32 *log)
* Log the state of the RAS status registers and prepare them to log the
* next error status. Return true if a reset is needed.
*/
static bool cxl_report_and_clear(struct cxl_dev_state *cxlds)
static bool __cxl_handle_ras(struct cxl_dev_state *cxlds,
void __iomem *ras_base)
{
u32 hl[CXL_HEADERLOG_SIZE_U32];
void __iomem *addr;
u32 status;
u32 fe;
if (!cxlds->regs.ras)
if (!ras_base)
return false;
addr = cxlds->regs.ras + CXL_RAS_UNCORRECTABLE_STATUS_OFFSET;
addr = ras_base + CXL_RAS_UNCORRECTABLE_STATUS_OFFSET;
status = readl(addr);
if (!(status & CXL_RAS_UNCORRECTABLE_STATUS_MASK))
return false;
@ -703,7 +709,7 @@ static bool cxl_report_and_clear(struct cxl_dev_state *cxlds)
/* If multiple errors, log header points to first error from ctrl reg */
if (hweight32(status) > 1) {
void __iomem *rcc_addr =
cxlds->regs.ras + CXL_RAS_CAP_CONTROL_OFFSET;
ras_base + CXL_RAS_CAP_CONTROL_OFFSET;
fe = BIT(FIELD_GET(CXL_RAS_CAP_CONTROL_FE_MASK,
readl(rcc_addr)));
@ -711,13 +717,201 @@ static bool cxl_report_and_clear(struct cxl_dev_state *cxlds)
fe = status;
}
header_log_copy(cxlds, hl);
header_log_copy(ras_base, hl);
trace_cxl_aer_uncorrectable_error(cxlds->cxlmd, status, fe, hl);
writel(status & CXL_RAS_UNCORRECTABLE_STATUS_MASK, addr);
return true;
}
static bool cxl_handle_endpoint_ras(struct cxl_dev_state *cxlds)
{
return __cxl_handle_ras(cxlds, cxlds->regs.ras);
}
#ifdef CONFIG_PCIEAER_CXL
static void cxl_dport_map_rch_aer(struct cxl_dport *dport)
{
struct cxl_rcrb_info *ri = &dport->rcrb;
void __iomem *dport_aer = NULL;
resource_size_t aer_phys;
struct device *host;
if (dport->rch && ri->aer_cap) {
host = dport->reg_map.host;
aer_phys = ri->aer_cap + ri->base;
dport_aer = devm_cxl_iomap_block(host, aer_phys,
sizeof(struct aer_capability_regs));
}
dport->regs.dport_aer = dport_aer;
}
static void cxl_dport_map_regs(struct cxl_dport *dport)
{
struct cxl_register_map *map = &dport->reg_map;
struct device *dev = dport->dport_dev;
if (!map->component_map.ras.valid)
dev_dbg(dev, "RAS registers not found\n");
else if (cxl_map_component_regs(map, &dport->regs.component,
BIT(CXL_CM_CAP_CAP_ID_RAS)))
dev_dbg(dev, "Failed to map RAS capability.\n");
if (dport->rch)
cxl_dport_map_rch_aer(dport);
}
static void cxl_disable_rch_root_ints(struct cxl_dport *dport)
{
void __iomem *aer_base = dport->regs.dport_aer;
struct pci_host_bridge *bridge;
u32 aer_cmd_mask, aer_cmd;
if (!aer_base)
return;
bridge = to_pci_host_bridge(dport->dport_dev);
/*
* Disable RCH root port command interrupts.
* CXL 3.0 12.2.1.1 - RCH Downstream Port-detected Errors
*
* This sequence may not be necessary: the CXL spec states that
* disabling the root command register's interrupts is required,
* but the PCI spec shows these are disabled by default on reset.
*/
if (bridge->native_cxl_error) {
aer_cmd_mask = (PCI_ERR_ROOT_CMD_COR_EN |
PCI_ERR_ROOT_CMD_NONFATAL_EN |
PCI_ERR_ROOT_CMD_FATAL_EN);
aer_cmd = readl(aer_base + PCI_ERR_ROOT_COMMAND);
aer_cmd &= ~aer_cmd_mask;
writel(aer_cmd, aer_base + PCI_ERR_ROOT_COMMAND);
}
}
void cxl_setup_parent_dport(struct device *host, struct cxl_dport *dport)
{
struct device *dport_dev = dport->dport_dev;
struct pci_host_bridge *host_bridge;
host_bridge = to_pci_host_bridge(dport_dev);
if (host_bridge->native_cxl_error)
dport->rcrb.aer_cap = cxl_rcrb_to_aer(dport_dev, dport->rcrb.base);
dport->reg_map.host = host;
cxl_dport_map_regs(dport);
if (dport->rch)
cxl_disable_rch_root_ints(dport);
}
EXPORT_SYMBOL_NS_GPL(cxl_setup_parent_dport, CXL);
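A hypothetical call site for the new export (the real hookup lives in the cxl_mem driver; this sketch only reuses functions visible elsewhere in this diff):

/* Associate an RCH dport with its endpoint host at probe time. */
struct cxl_dport *dport;
struct cxl_port *parent_port = cxl_mem_find_port(cxlmd, &dport);

if (parent_port) {
cxl_setup_parent_dport(&cxlmd->dev, dport);
put_device(&parent_port->dev);
}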
static void cxl_handle_rdport_cor_ras(struct cxl_dev_state *cxlds,
struct cxl_dport *dport)
{
return __cxl_handle_cor_ras(cxlds, dport->regs.ras);
}
static bool cxl_handle_rdport_ras(struct cxl_dev_state *cxlds,
struct cxl_dport *dport)
{
return __cxl_handle_ras(cxlds, dport->regs.ras);
}
/*
* Copy the AER capability registers using 32-bit read accesses.
* This is necessary because the RCRB AER capability is MMIO mapped. Clear the
* status after copying.
*
* @aer_base: base address of AER capability block in RCRB
* @aer_regs: destination for copying AER capability
*/
static bool cxl_rch_get_aer_info(void __iomem *aer_base,
struct aer_capability_regs *aer_regs)
{
int read_cnt = sizeof(struct aer_capability_regs) / sizeof(u32);
u32 *aer_regs_buf = (u32 *)aer_regs;
int n;
if (!aer_base)
return false;
/* Use readl() to guarantee 32-bit accesses */
for (n = 0; n < read_cnt; n++)
aer_regs_buf[n] = readl(aer_base + n * sizeof(u32));
writel(aer_regs->uncor_status, aer_base + PCI_ERR_UNCOR_STATUS);
writel(aer_regs->cor_status, aer_base + PCI_ERR_COR_STATUS);
return true;
}
/* Get AER severity. Return false if there is no error. */
static bool cxl_rch_get_aer_severity(struct aer_capability_regs *aer_regs,
int *severity)
{
if (aer_regs->uncor_status & ~aer_regs->uncor_mask) {
if (aer_regs->uncor_status & PCI_ERR_ROOT_FATAL_RCV)
*severity = AER_FATAL;
else
*severity = AER_NONFATAL;
return true;
}
if (aer_regs->cor_status & ~aer_regs->cor_mask) {
*severity = AER_CORRECTABLE;
return true;
}
return false;
}
static void cxl_handle_rdport_errors(struct cxl_dev_state *cxlds)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
struct aer_capability_regs aer_regs;
struct cxl_dport *dport;
struct cxl_port *port;
int severity;
port = cxl_pci_find_port(pdev, &dport);
if (!port)
return;
put_device(&port->dev);
if (!cxl_rch_get_aer_info(dport->regs.dport_aer, &aer_regs))
return;
if (!cxl_rch_get_aer_severity(&aer_regs, &severity))
return;
pci_print_aer(pdev, severity, &aer_regs);
if (severity == AER_CORRECTABLE)
cxl_handle_rdport_cor_ras(cxlds, dport);
else
cxl_handle_rdport_ras(cxlds, dport);
}
#else
static void cxl_handle_rdport_errors(struct cxl_dev_state *cxlds) { }
#endif
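With CONFIG_PCIEAER_CXL disabled, cxl_handle_rdport_errors() compiles to the empty stub above, so the endpoint handlers below can call it unconditionally for cxlds->rcd devices.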
void cxl_cor_error_detected(struct pci_dev *pdev)
{
struct cxl_dev_state *cxlds = pci_get_drvdata(pdev);
if (cxlds->rcd)
cxl_handle_rdport_errors(cxlds);
cxl_handle_endpoint_cor_ras(cxlds);
}
EXPORT_SYMBOL_NS_GPL(cxl_cor_error_detected, CXL);
pci_ers_result_t cxl_error_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
@ -726,13 +920,16 @@ pci_ers_result_t cxl_error_detected(struct pci_dev *pdev,
struct device *dev = &cxlmd->dev;
bool ue;
if (cxlds->rcd)
cxl_handle_rdport_errors(cxlds);
/*
* A frozen channel indicates an impending reset which is fatal to
* CXL.mem operation, and will likely crash the system. On the off
* chance the situation is recoverable, dump the status of the RAS
* capability registers and bounce the active state of the memdev.
*/
ue = cxl_report_and_clear(cxlds);
ue = cxl_handle_endpoint_ras(cxlds);
switch (state) {
case pci_channel_io_normal:

View File

@ -625,7 +625,6 @@ static int devm_cxl_link_parent_dport(struct device *host,
static struct lock_class_key cxl_port_key;
static struct cxl_port *cxl_port_alloc(struct device *uport_dev,
resource_size_t component_reg_phys,
struct cxl_dport *parent_dport)
{
struct cxl_port *port;
@ -676,7 +675,6 @@ static struct cxl_port *cxl_port_alloc(struct device *uport_dev,
} else
dev->parent = uport_dev;
port->component_reg_phys = component_reg_phys;
ida_init(&port->decoder_ida);
port->hdm_end = -1;
port->commit_end = -1;
@ -697,18 +695,20 @@ static struct cxl_port *cxl_port_alloc(struct device *uport_dev,
return ERR_PTR(rc);
}
static int cxl_setup_comp_regs(struct device *dev, struct cxl_register_map *map,
static int cxl_setup_comp_regs(struct device *host, struct cxl_register_map *map,
resource_size_t component_reg_phys)
{
*map = (struct cxl_register_map) {
.host = host,
.reg_type = CXL_REGLOC_RBI_EMPTY,
.resource = component_reg_phys,
};
if (component_reg_phys == CXL_RESOURCE_NONE)
return 0;
*map = (struct cxl_register_map) {
.dev = dev,
.reg_type = CXL_REGLOC_RBI_COMPONENT,
.resource = component_reg_phys,
.max_size = CXL_COMPONENT_REG_BLOCK_SIZE,
};
map->reg_type = CXL_REGLOC_RBI_COMPONENT;
map->max_size = CXL_COMPONENT_REG_BLOCK_SIZE;
return cxl_setup_regs(map);
}
@ -718,17 +718,27 @@ static int cxl_port_setup_regs(struct cxl_port *port,
{
if (dev_is_platform(port->uport_dev))
return 0;
return cxl_setup_comp_regs(&port->dev, &port->comp_map,
return cxl_setup_comp_regs(&port->dev, &port->reg_map,
component_reg_phys);
}
static int cxl_dport_setup_regs(struct cxl_dport *dport,
static int cxl_dport_setup_regs(struct device *host, struct cxl_dport *dport,
resource_size_t component_reg_phys)
{
int rc;
if (dev_is_platform(dport->dport_dev))
return 0;
return cxl_setup_comp_regs(dport->dport_dev, &dport->comp_map,
component_reg_phys);
/*
* Use @dport->dport_dev as the context for error messages during
* register probing, and fix up @host after the fact, since @host may be
* NULL.
*/
rc = cxl_setup_comp_regs(dport->dport_dev, &dport->reg_map,
component_reg_phys);
dport->reg_map.host = host;
return rc;
}
static struct cxl_port *__devm_cxl_add_port(struct device *host,
@ -740,21 +750,36 @@ static struct cxl_port *__devm_cxl_add_port(struct device *host,
struct device *dev;
int rc;
port = cxl_port_alloc(uport_dev, component_reg_phys, parent_dport);
port = cxl_port_alloc(uport_dev, parent_dport);
if (IS_ERR(port))
return port;
dev = &port->dev;
if (is_cxl_memdev(uport_dev))
rc = dev_set_name(dev, "endpoint%d", port->id);
else if (parent_dport)
rc = dev_set_name(dev, "port%d", port->id);
else
rc = dev_set_name(dev, "root%d", port->id);
if (rc)
goto err;
if (is_cxl_memdev(uport_dev)) {
struct cxl_memdev *cxlmd = to_cxl_memdev(uport_dev);
struct cxl_dev_state *cxlds = cxlmd->cxlds;
rc = cxl_port_setup_regs(port, component_reg_phys);
rc = dev_set_name(dev, "endpoint%d", port->id);
if (rc)
goto err;
/*
* The endpoint driver already enumerated the component and RAS
* registers. Reuse that enumeration while prepping them to be
* mapped by the cxl_port driver.
*/
port->reg_map = cxlds->reg_map;
port->reg_map.host = &port->dev;
} else if (parent_dport) {
rc = dev_set_name(dev, "port%d", port->id);
if (rc)
goto err;
rc = cxl_port_setup_regs(port, component_reg_phys);
if (rc)
goto err;
} else
rc = dev_set_name(dev, "root%d", port->id);
if (rc)
goto err;
@ -989,7 +1014,16 @@ __devm_cxl_add_dport(struct cxl_port *port, struct device *dport_dev,
if (!dport)
return ERR_PTR(-ENOMEM);
if (rcrb != CXL_RESOURCE_NONE) {
dport->dport_dev = dport_dev;
dport->port_id = port_id;
dport->port = port;
if (rcrb == CXL_RESOURCE_NONE) {
rc = cxl_dport_setup_regs(&port->dev, dport,
component_reg_phys);
if (rc)
return ERR_PTR(rc);
} else {
dport->rcrb.base = rcrb;
component_reg_phys = __rcrb_to_component(dport_dev, &dport->rcrb,
CXL_RCRB_DOWNSTREAM);
@ -998,6 +1032,14 @@ __devm_cxl_add_dport(struct cxl_port *port, struct device *dport_dev,
return ERR_PTR(-ENXIO);
}
/*
* RCH @dport is not ready to map until associated with its
* memdev
*/
rc = cxl_dport_setup_regs(NULL, dport, component_reg_phys);
if (rc)
return ERR_PTR(rc);
dport->rch = true;
}
@ -1005,14 +1047,6 @@ __devm_cxl_add_dport(struct cxl_port *port, struct device *dport_dev,
dev_dbg(dport_dev, "Component Registers found for dport: %pa\n",
&component_reg_phys);
dport->dport_dev = dport_dev;
dport->port_id = port_id;
dport->port = port;
rc = cxl_dport_setup_regs(dport, component_reg_phys);
if (rc)
return ERR_PTR(rc);
cond_cxl_root_lock(port);
rc = add_dport(port, dport);
cond_cxl_root_unlock(port);
@ -1223,35 +1257,39 @@ static struct device *grandparent(struct device *dev)
return NULL;
}
static struct device *endpoint_host(struct cxl_port *endpoint)
{
struct cxl_port *port = to_cxl_port(endpoint->dev.parent);
if (is_cxl_root(port))
return port->uport_dev;
return &port->dev;
}
static void delete_endpoint(void *data)
{
struct cxl_memdev *cxlmd = data;
struct cxl_port *endpoint = cxlmd->endpoint;
struct cxl_port *parent_port;
struct device *parent;
struct device *host = endpoint_host(endpoint);
parent_port = cxl_mem_find_port(cxlmd, NULL);
if (!parent_port)
goto out;
parent = &parent_port->dev;
device_lock(parent);
if (parent->driver && !endpoint->dead) {
devm_release_action(parent, cxl_unlink_parent_dport, endpoint);
devm_release_action(parent, cxl_unlink_uport, endpoint);
devm_release_action(parent, unregister_port, endpoint);
device_lock(host);
if (host->driver && !endpoint->dead) {
devm_release_action(host, cxl_unlink_parent_dport, endpoint);
devm_release_action(host, cxl_unlink_uport, endpoint);
devm_release_action(host, unregister_port, endpoint);
}
cxlmd->endpoint = NULL;
device_unlock(parent);
put_device(parent);
out:
device_unlock(host);
put_device(&endpoint->dev);
put_device(host);
}
int cxl_endpoint_autoremove(struct cxl_memdev *cxlmd, struct cxl_port *endpoint)
{
struct device *host = endpoint_host(endpoint);
struct device *dev = &cxlmd->dev;
get_device(host);
get_device(&endpoint->dev);
cxlmd->endpoint = endpoint;
cxlmd->depth = endpoint->depth;
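The get_device() calls here pair one-for-one with the put_device() calls in delete_endpoint() above, pinning both the endpoint and its host until the devm release actions have run. A runnable userspace analogue of that paired pinning, with a toy refcount and invented names:

	#include <stdio.h>

	struct obj { const char *name; int refs; };

	static void get_obj(struct obj *o) { o->refs++; }
	static void put_obj(struct obj *o)
	{
		if (--o->refs == 0)
			printf("%s: final reference dropped\n", o->name);
	}

	static void delete_endpoint_sketch(struct obj *endpoint, struct obj *host)
	{
		/* ... unwind registrations under the host's lock ... */
		put_obj(endpoint);
		put_obj(host);
	}

	int main(void)
	{
		struct obj host = { "host", 1 }, endpoint = { "endpoint", 1 };

		/* autoremove registration: pin both until teardown runs */
		get_obj(&host);
		get_obj(&endpoint);

		delete_endpoint_sketch(&endpoint, &host);

		/* original references dropped by their own owners later */
		put_obj(&endpoint);
		put_obj(&host);
		return 0;
	}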
@@ -2068,3 +2106,4 @@ static void cxl_core_exit(void)
subsys_initcall(cxl_core_init);
module_exit(cxl_core_exit);
MODULE_LICENSE("GPL v2");
MODULE_IMPORT_NS(CXL);


@@ -204,7 +204,7 @@ int cxl_map_component_regs(const struct cxl_register_map *map,
struct cxl_component_regs *regs,
unsigned long map_mask)
{
struct device *dev = map->dev;
struct device *host = map->host;
struct mapinfo {
const struct cxl_reg_map *rmap;
void __iomem **addr;
@@ -216,16 +216,16 @@ int cxl_map_component_regs(const struct cxl_register_map *map,
for (i = 0; i < ARRAY_SIZE(mapinfo); i++) {
struct mapinfo *mi = &mapinfo[i];
resource_size_t phys_addr;
resource_size_t addr;
resource_size_t length;
if (!mi->rmap->valid)
continue;
if (!test_bit(mi->rmap->id, &map_mask))
continue;
phys_addr = map->resource + mi->rmap->offset;
addr = map->resource + mi->rmap->offset;
length = mi->rmap->size;
*(mi->addr) = devm_cxl_iomap_block(dev, phys_addr, length);
*(mi->addr) = devm_cxl_iomap_block(host, addr, length);
if (!*(mi->addr))
return -ENOMEM;
}
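The mapinfo[] table above is what lets one loop service every register block: each entry pairs a block's layout with the pointer slot that should receive its mapping. A runnable userspace rendering of the same table-driven walk, with calloc() standing in for devm_cxl_iomap_block() and all names invented:

	#include <stdio.h>
	#include <stdlib.h>

	struct reg_map_sketch { int valid; size_t offset; size_t size; };

	int main(void)
	{
		char *hdm_base = NULL, *ras_base = NULL;
		struct reg_map_sketch hdm = { 1, 0x100, 0x40 };
		struct reg_map_sketch ras = { 1, 0x200, 0x60 };
		struct mapinfo {
			const struct reg_map_sketch *rmap;
			char **addr;
		} mapinfo[] = {
			{ &hdm, &hdm_base },
			{ &ras, &ras_base },
		};

		for (size_t i = 0; i < sizeof(mapinfo) / sizeof(mapinfo[0]); i++) {
			struct mapinfo *mi = &mapinfo[i];

			if (!mi->rmap->valid)
				continue;
			/* stand-in for devm_cxl_iomap_block(host, addr, length) */
			*mi->addr = calloc(1, mi->rmap->size);
			if (!*mi->addr)
				return 1;
			printf("mapped block %zu at offset %#zx\n",
			       i, mi->rmap->offset);
		}
		free(hdm_base);
		free(ras_base);
		return 0;
	}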
@@ -237,7 +237,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_map_component_regs, CXL);
int cxl_map_device_regs(const struct cxl_register_map *map,
struct cxl_device_regs *regs)
{
struct device *dev = map->dev;
struct device *host = map->host;
resource_size_t phys_addr = map->resource;
struct mapinfo {
const struct cxl_reg_map *rmap;
@@ -259,7 +259,7 @@ int cxl_map_device_regs(const struct cxl_register_map *map,
addr = phys_addr + mi->rmap->offset;
length = mi->rmap->size;
*(mi->addr) = devm_cxl_iomap_block(dev, addr, length);
*(mi->addr) = devm_cxl_iomap_block(host, addr, length);
if (!*(mi->addr))
return -ENOMEM;
}
@@ -309,7 +309,7 @@ int cxl_find_regblock_instance(struct pci_dev *pdev, enum cxl_regloc_type type,
int regloc, i;
*map = (struct cxl_register_map) {
.dev = &pdev->dev,
.host = &pdev->dev,
.resource = CXL_RESOURCE_NONE,
};
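The compound-literal assignment above does more than set two fields: every member of *map that is not named, including any stale base or reg_type from a prior probe, is reset to zero. A short standalone demonstration of that C behavior, using a hypothetical struct:

	#include <stdio.h>

	struct map_sketch { void *host; long resource; int reg_type; void *base; };

	int main(void)
	{
		struct map_sketch map = { .reg_type = 42, .base = &map /* stale */ };
		int host;

		map = (struct map_sketch) {
			.host = &host,
			.resource = -1,	/* stand-in for CXL_RESOURCE_NONE */
		};
		/* unnamed members were zeroed: prints 0 and a null pointer */
		printf("reg_type=%d base=%p\n", map.reg_type, map.base);
		return 0;
	}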
@@ -386,10 +386,9 @@ int cxl_count_regblock(struct pci_dev *pdev, enum cxl_regloc_type type)
}
EXPORT_SYMBOL_NS_GPL(cxl_count_regblock, CXL);
int cxl_map_pmu_regs(struct pci_dev *pdev, struct cxl_pmu_regs *regs,
struct cxl_register_map *map)
int cxl_map_pmu_regs(struct cxl_register_map *map, struct cxl_pmu_regs *regs)
{
struct device *dev = &pdev->dev;
struct device *dev = map->host;
resource_size_t phys_addr;
phys_addr = map->resource;
@@ -403,15 +402,15 @@ EXPORT_SYMBOL_NS_GPL(cxl_map_pmu_regs, CXL);
static int cxl_map_regblock(struct cxl_register_map *map)
{
struct device *dev = map->dev;
struct device *host = map->host;
map->base = ioremap(map->resource, map->max_size);
if (!map->base) {
dev_err(dev, "failed to map registers\n");
dev_err(host, "failed to map registers\n");
return -ENOMEM;
}
dev_dbg(dev, "Mapped CXL Memory Device resource %pa\n", &map->resource);
dev_dbg(host, "Mapped CXL Memory Device resource %pa\n", &map->resource);
return 0;
}
@@ -425,28 +424,28 @@ static int cxl_probe_regs(struct cxl_register_map *map)
{
struct cxl_component_reg_map *comp_map;
struct cxl_device_reg_map *dev_map;
struct device *dev = map->dev;
struct device *host = map->host;
void __iomem *base = map->base;
switch (map->reg_type) {
case CXL_REGLOC_RBI_COMPONENT:
comp_map = &map->component_map;
cxl_probe_component_regs(dev, base, comp_map);
dev_dbg(dev, "Set up component registers\n");
cxl_probe_component_regs(host, base, comp_map);
dev_dbg(host, "Set up component registers\n");
break;
case CXL_REGLOC_RBI_MEMDEV:
dev_map = &map->device_map;
cxl_probe_device_regs(dev, base, dev_map);
cxl_probe_device_regs(host, base, dev_map);
if (!dev_map->status.valid || !dev_map->mbox.valid ||
!dev_map->memdev.valid) {
dev_err(dev, "registers not found: %s%s%s\n",
dev_err(host, "registers not found: %s%s%s\n",
!dev_map->status.valid ? "status " : "",
!dev_map->mbox.valid ? "mbox " : "",
!dev_map->memdev.valid ? "memdev " : "");
return -ENXIO;
}
dev_dbg(dev, "Probing device registers...\n");
dev_dbg(host, "Probing device registers...\n");
break;
default:
break;
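The "%s%s%s" construction in the memdev branch above folds up to three optional tokens into a single diagnostic line without intermediate buffers. A standalone copy of the idiom:

	#include <stdio.h>

	int main(void)
	{
		int status_valid = 1, mbox_valid = 0, memdev_valid = 0;

		if (!status_valid || !mbox_valid || !memdev_valid)
			printf("registers not found: %s%s%s\n",
			       !status_valid ? "status " : "",
			       !mbox_valid ? "mbox " : "",
			       !memdev_valid ? "memdev " : "");
		return 0;
	}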
@@ -470,6 +469,42 @@ int cxl_setup_regs(struct cxl_register_map *map)
}
EXPORT_SYMBOL_NS_GPL(cxl_setup_regs, CXL);
u16 cxl_rcrb_to_aer(struct device *dev, resource_size_t rcrb)
{
void __iomem *addr;
u16 offset = 0;
u32 cap_hdr;
if (WARN_ON_ONCE(rcrb == CXL_RESOURCE_NONE))
return 0;
if (!request_mem_region(rcrb, SZ_4K, dev_name(dev)))
return 0;
addr = ioremap(rcrb, SZ_4K);
if (!addr)
goto out;
cap_hdr = readl(addr + offset);
while (PCI_EXT_CAP_ID(cap_hdr) != PCI_EXT_CAP_ID_ERR) {
offset = PCI_EXT_CAP_NEXT(cap_hdr);
/* Offset 0 terminates capability list. */
if (!offset)
break;
cap_hdr = readl(addr + offset);
}
if (offset)
dev_dbg(dev, "found AER extended capability (0x%x)\n", offset);
iounmap(addr);
out:
release_mem_region(rcrb, SZ_4K);
return offset;
}
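cxl_rcrb_to_aer() walks the PCIe extended capability list that an RCRB carries at its base, using the standard header layout (bits [15:0] capability ID, bits [31:20] next-capability offset) until it finds the AER capability or hits a zero next pointer. A runnable userspace walk over a synthetic 4K block; the EXT_CAP_* macros mirror the kernel's PCI_EXT_CAP_ID()/PCI_EXT_CAP_NEXT() definitions, and the capability chain is fabricated for the demo:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define EXT_CAP_ID(hdr)		((hdr) & 0x0000ffff)
	#define EXT_CAP_NEXT(hdr)	(((hdr) >> 20) & 0xffc)
	#define EXT_CAP_ID_ERR		0x0001	/* AER */

	static uint32_t cap_hdr(uint16_t id, uint16_t next)
	{
		return (uint32_t)id | ((uint32_t)next << 20);
	}

	int main(void)
	{
		uint8_t rcrb[4096] = { 0 };
		uint32_t hdr;
		uint16_t offset = 0;

		/* synthetic chain: cap 0x23 at 0 -> cap 0x2e at 0x100 -> AER at 0x200 */
		hdr = cap_hdr(0x23, 0x100); memcpy(rcrb + 0x000, &hdr, 4);
		hdr = cap_hdr(0x2e, 0x200); memcpy(rcrb + 0x100, &hdr, 4);
		hdr = cap_hdr(EXT_CAP_ID_ERR, 0); memcpy(rcrb + 0x200, &hdr, 4);

		memcpy(&hdr, rcrb + offset, 4);	/* stand-in for readl() */
		while (EXT_CAP_ID(hdr) != EXT_CAP_ID_ERR) {
			offset = EXT_CAP_NEXT(hdr);
			if (!offset)	/* offset 0 terminates the list */
				break;
			memcpy(&hdr, rcrb + offset, 4);
		}
		printf("AER capability at offset %#x\n", offset); /* 0x200 */
		return 0;
	}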
resource_size_t __rcrb_to_component(struct device *dev, struct cxl_rcrb_info *ri,
enum cxl_rcrb which)
{

Some files were not shown because too many files have changed in this diff.