pci-v6.13-changes

-----BEGIN PGP SIGNATURE-----
 
 iQJIBAABCgAyFiEEgMe7l+5h9hnxdsnuWYigwDrT+vwFAmdE14wUHGJoZWxnYWFz
 QGdvb2dsZS5jb20ACgkQWYigwDrT+vxMPRAAslaEhHZ06cU/I+BA0UrMJBbzOw+/
 XM2XUojxWaNMYSBPVXbtSBrfFMnox4G3hFBPK0T0HiWoc7wGx/TUVJk65ioqM8ug
 gS/U3NjSlqlnH8NHxKrb/2t0tlMvSll9WwumOD9pMFeMGFOS3fAgUk+fBqXFYsI/
 RsVRMavW9BucZ0yMHpgr0KGLPSt3HK/E1h0NLO+TN6dpFcoIq3XimKFyk1QQQgiR
 V3W21JMwjw+lDnUAsijU+RBYi5Fj6Rpqig/biRnzagVE6PJOci3ZJEBE7dGqm4LM
 UlgG6Ql/eK+bb3fPhcXxVmscj5XlEfbesX5PUzTmuj79Wq5l9hpy+0c654G79y8b
 rGiEVGM0NxmRdbuhWQUM2EsffqFlkFu7MN3gH0tP0Z0t3VTXfBcGrQJfqCcSCZG3
 5IwGdEE2kmGb5c3RApZrm+HCXdxhb3Nwc3P8c27eXDT4eqHWDJag4hzLETNBdIrn
 Rsbgry6zzAVA6lLT0uasUlWerq/I6OrueJvnEKRGKDtbw/JL6PLveR1Rvsc//cQD
 Tu4FcG81bldQTUOdHEgFyJgmSu77Gvfs5RZBV0cEtcCBc33uGJne08kOdGD4BwWJ
 dqN3wJFh5yX4jlMGmBDw0KmFIwKstfUCIoDE4Kjtal02CURhz5ZCDVGNPnSUKN0C
 hflVX0//cRkHc5g=
 =2Otz
 -----END PGP SIGNATURE-----

Merge tag 'pci-v6.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull PCI updates from Bjorn Helgaas:
 "Enumeration:

   - Make pci_stop_dev() and pci_destroy_dev() safe so concurrent
     callers can't stop a device multiple times, even as we migrate from
     the global pci_rescan_remove_lock to finer-grained locking (Keith
     Busch)

   - Improve pci_walk_bus() implementation by making it recursive and
     moving locking up to avoid the need for a 'locked' parameter (Keith
     Busch)

   - Unexport pci_walk_bus_locked(), which is only used internally by
     the PCI core (Keith Busch)

   - Detect some Thunderbolt chips that are built-in and hence
     'trustworthy' by a heuristic since the 'ExternalFacingPort' and
     'usb4-host-interface' ACPI properties are not quite enough (Esther
     Shimanovich)

  Resource management:

   - Use PCI bus addresses (not CPU addresses) in 'ranges' properties
     when building dynamic DT nodes so systems where PCI and CPU
     addresses differ work correctly (Andrea della Porta)

   - Tidy resource sizing and assignment with helpers to reduce
     redundancy (Ilpo Järvinen)

   - Improve pdev_sort_resources() 'bogus alignment' warning to be more
     specific (Ilpo Järvinen)

  Driver binding:

   - Convert driver .remove_new() callbacks to .remove() again to finish
     the conversion from returning 'int' to being 'void' (Sergio
     Paracuellos)

   - Export pcim_request_all_regions(), a managed interface to request
     all BARs (Philipp Stanner)

   - Replace pcim_iomap_regions_request_all() with
     pcim_request_all_regions(), and pcim_iomap_table()[n] with
     pcim_iomap(n), in the following drivers: ahci, crypto qat, crypto
     octeontx2, intel_th, iwlwifi, ntb idt, serial rp2, ALSA korg1212
     (Philipp Stanner); the sketch after this list shows the pattern

   - Remove the now unused pcim_iomap_regions_request_all() (Philipp
     Stanner)

   - Export pcim_iounmap_region(), a managed interface to unmap and
     release a PCI BAR (Philipp Stanner)

   - Replace pcim_iomap_regions(mask) with pcim_iomap_region(n), and
     pcim_iounmap_regions(mask) with pcim_iounmap_region(n), in the
     following drivers: fpga dfl-pci, block mtip32xx, gpio-merrifield,
     cavium (Philipp Stanner)
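
   A rough sketch of the conversion pattern named above (an editorial
   sketch, not taken verbatim from any one driver; 'bar' stands for
   whichever BAR a driver uses, and error paths vary):

       /* old: request all regions and iomap several BARs in one call */
       rc = pcim_iomap_regions_request_all(pdev, 1 << bar, DRV_NAME);
       mmio = pcim_iomap_table(pdev)[bar];

       /* new: request all regions, then iomap only the BARs needed */
       rc = pcim_request_all_regions(pdev, DRV_NAME);
       if (rc)
               return rc;
       mmio = pcim_iomap(pdev, bar, 0);  /* len 0 maps the whole BAR */
       if (!mmio)
               return -ENOMEM;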

  Error handling:

   - Add sysfs 'reset_subordinate' to reset the entire hierarchy below a
     bridge; previously Secondary Bus Reset could only be used when
     there was a single device below a bridge (Keith Busch)

   - Warn if we reset a running device where the driver didn't register
     pci_error_handlers notification callbacks (Keith Busch)

  ASPM:

   - Disable ASPM L1 before touching L1 PM Substates to follow the spec
      more closely and avoid a CPU load timeout on some platforms (Ajay
     Agarwal)

   - Set devices below Intel VMD to D0 before enabling ASPM L1 Substates
     as required per spec for all L1 Substates changes (Jian-Hong Pan)

  Power management:

   - Enable starfive controller runtime PM before probing host bridge
     (Mayank Rana)

   - Enable runtime power management for host bridges (Krishna chaitanya
     chundru)

  Power control:

   - Use of_platform_device_create() instead of of_platform_populate()
     to create pwrctl platform devices so their creation can be
     controlled based on the child nodes (Manivannan Sadhasivam)

   - Create pwrctrl platform devices only if there's a relevant power
     supply property (Manivannan Sadhasivam)

   - Add device link from the pwrctl supplier to the PCI dev to ensure
     pwrctl drivers are probed before the PCI dev driver; this avoids a
     race where pwrctl could change device power state while the PCI
     driver was active (Manivannan Sadhasivam)

   - Find pwrctl device for removal with of_find_device_by_node()
     instead of searching all children of the parent (Manivannan
     Sadhasivam)

   - Rename 'pwrctl' to 'pwrctrl' to match new bandwidth controller
     ('bwctrl') and hotplug files (Bjorn Helgaas)

  Bandwidth control:

   - Add read/modify/write locking for Link Control 2, which is used to
     manage Link speed (Ilpo Järvinen)

   - Extract Link Bandwidth Management Status check into
     pcie_lbms_seen(), where it can be shared between the bandwidth
     controller and quirks that use it to help retrain failed links
     (Ilpo Järvinen)

   - Re-add Link Bandwidth notification support with updates to address
     the reasons it was previously reverted (Alexandru Gagniuc, Ilpo
     Järvinen)

   - Add pcie_set_target_speed() and related functionality so drivers
     can manage PCIe Link speed based on thermal or other constraints
     (Ilpo Järvinen); see the sketch after this list

   - Add a thermal cooling driver to throttle PCIe Links via the
     existing thermal management framework (Ilpo Järvinen)

   - Add a userspace selftest for the PCIe bandwidth controller (Ilpo
     Järvinen)
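
   A sketch of how a driver might cap a Link's speed with the new
   interface (assuming the signature added by this series; 'port' is
   the Downstream Port at the upstream end of the Link):

       /* request that the Link run at no more than 8.0 GT/s */
       ret = pcie_set_target_speed(port, PCIE_SPEED_8_0GT, true);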

  PCI device hotplug:

   - Add hotplug controller driver for Marvell OCTEON multi-function
     device where function 0 has a management console interface to
     enable/disable and provision various personalities for the other
     functions (Shijith Thotton)

   - Retain a reference to the pci_bus for the lifetime of a pci_slot to
     avoid a use-after-free when the thunderbolt driver resets USB4 host
     routers on boot, causing hotplug remove/add of downstream docks or
     other devices (Lukas Wunner)

   - Remove unused cpcihp struct cpci_hp_controller_ops.hardware_test
     (Guilherme Giacomo Simoes)

   - Remove unused cpqphp struct ctrl_dbg.ctrl (Christophe JAILLET)

   - Use pci_bus_read_dev_vendor_id() instead of hand-coded presence
     detection in cpqphp (Ilpo Järvinen)

   - Simplify cpqphp enumeration, which is already simple-minded and
     doesn't handle devices below hot-added bridges (Ilpo Järvinen)

  Virtualization:

   - Add ACS quirk for Wangxun FF5xxx NICs, which don't advertise an ACS
     capability but do isolate functions as though PCI_ACS_RR and
     PCI_ACS_CR were set, so the functions can be in independent IOMMU
     groups (Mengyuan Lou)

  TLP Processing Hints (TPH):

   - Add and document TLP Processing Hints (TPH) support so drivers can
     enable and disable TPH and the kernel can save/restore TPH
     configuration (Wei Huang)

   - Add TPH Steering Tag support so drivers can retrieve Steering Tag
     values associated with specific CPUs via an ACPI _DSM to improve
     performance by directing DMA writes closer to their consumers (Wei
     Huang)

  Data Object Exchange (DOE):

   - Wait up to 1 second for DOE Busy bit to clear before writing a
     request to the mailbox to avoid failures if the mailbox is still
     busy from a previous transfer (Gregory Price)

  Endpoint framework:

   - Skip attempts to allocate from endpoint controller memory window if
     the requested size is larger than the window (Damien Le Moal)

   - Add and document pci_epc_mem_map() and pci_epc_mem_unmap() to
     handle controller-specific size and alignment constraints, and add
     test cases to the endpoint test driver (Damien Le Moal)

   - Implement dwc pci_epc_ops.align_addr() so pci_epc_mem_map() can
     observe DWC-specific alignment requirements (Damien Le Moal)

   - Synchronously cancel command handler work in endpoint test before
     cleaning up DMA and BARs (Damien Le Moal)

   - Respect endpoint page size in dw_pcie_ep_align_addr() (Niklas
     Cassel)

   - Use dw_pcie_ep_align_addr() in dw_pcie_ep_raise_msi_irq() and
     dw_pcie_ep_raise_msix_irq() instead of open coding the equivalent
     (Niklas Cassel)

   - Avoid NULL dereference if Modem Host Interface Endpoint lacks
     'mmio' DT property (Zhongqiu Han)

   - Release the PCI domain ID of the Endpoint controller's parent (not
     the controller itself), and do so before unregistering the
     controller, to avoid a use-after-free (Zijun Hu)

   - Clear secondary (not primary) EPC in pci_epc_remove_epf() when
     removing the secondary controller associated with an NTB (Zijun Hu)

  Cadence PCIe controller driver:

   - Lower severity of 'phy-names' message (Bartosz Wawrzyniak)

  Freescale i.MX6 PCIe controller driver:

   - Fix suspend/resume support on i.MX6QDL, which has a hardware
     erratum that prevents use of L2 (Stefan Eichenberger)

  Intel VMD host bridge driver:

   - Add 0xb60b and 0xb06f Device IDs for client SKUs (Nirmal Patel)

  MediaTek PCIe Gen3 controller driver:

   - Update mediatek-gen3 DT binding to require the exact number of
     clocks for each SoC (Fei Shao)

   - Add support for DT 'max-link-speed' and 'num-lanes' properties to
     restrict the link speed and width (AngeloGioacchino Del Regno)

  Microchip PolarFire PCIe controller driver:

   - Add DT and driver support for using either of the two PolarFire
     Root Ports (Conor Dooley)

  NVIDIA Tegra194 PCIe controller driver:

   - Move endpoint controller cleanups that depend on refclk from the
     host to the notifier that tells us the host has deasserted PERST#,
     when refclk should be valid (Manivannan Sadhasivam)

  Qualcomm PCIe controller driver:

   - Add qcom SAR2130P DT binding with an additional clock (Dmitry
     Baryshkov)

   - Enable MSI interrupts if 'global' IRQ is supported, since a
     previous commit unintentionally masked them (Manivannan Sadhasivam)

   - Move endpoint controller cleanups that depend on refclk from the
     host to the notifier that tells us the host has deasserted PERST#,
     when refclk should be valid (Manivannan Sadhasivam)

   - Add DT binding and driver support for IPQ9574, with Synopsys IP
     v5.80a and Qcom IP 1.27.0 (devi priya)

   - Move the OPP "operating-points-v2" table from the
     qcom,pcie-sm8450.yaml DT binding to qcom,pcie-common.yaml, where it
     can be used by other Qcom platforms (Qiang Yu)

   - Add 'global' SPI interrupt for events like link-up, link-down to
     qcom,pcie-x1e80100 DT binding so we can start enumeration when the
     link comes up (Qiang Yu)

   - Disable ASPM L0s for qcom,pcie-x1e80100 since the PHY is not tuned
     to support this (Qiang Yu)

   - Add ops_1_21_0 for SC8280X family SoC, which doesn't use the
     'iommu-map' DT property and doesn't need BDF-to-SID translation
     (Qiang Yu)

  Rockchip PCIe controller driver:

   - Define ROCKCHIP_PCIE_AT_SIZE_ALIGN to replace magic 256 endpoint
     .align value (Damien Le Moal)

   - When unmapping an endpoint window, compute the region index instead
     of searching for it, and verify that the address was mapped (Damien
     Le Moal)

   - When mapping an endpoint window, verify that the address hasn't
     been mapped already (Damien Le Moal)

   - Implement pci_epc_ops.align_addr() for rockchip-ep (Damien Le Moal)

   - Fix MSI IRQ data mapping to observe the alignment constraint, which
     fixes intermittent page faults in memcpy_toio() and memcpy_fromio()
     (Damien Le Moal)

   - Rename rockchip_pcie_parse_ep_dt() to
     rockchip_pcie_ep_get_resources() for consistency with similar DT
     interfaces (Damien Le Moal)

   - Skip the unnecessary link train in rockchip_pcie_ep_probe() and do
     it only in the endpoint start operation (Damien Le Moal)

   - Implement pci_epc_ops.stop_link() to disable link training and
     controller configuration (Damien Le Moal)

   - Attempt link training at 5 GT/s when both partners support it
     (Damien Le Moal)

   - Add a handler for PERST# signal so we can detect host-initiated
     resets and start link training after PERST# is deasserted (Damien
     Le Moal)

  Synopsys DesignWare PCIe controller driver:

   - Clear outbound address on unmap so dw_pcie_find_index() won't match
     an ATU index that was already unmapped (Damien Le Moal)

   - Use of_property_present() instead of of_property_read_bool() when
     testing for presence of non-boolean DT properties (Rob Herring)

   - Advertise 1MB size if endpoint supports Resizable BARs, which was
     inadvertently lost in v6.11 (Niklas Cassel)

  TI J721E PCIe driver:

   - Add PCIe support for J722S SoC (Siddharth Vadapalli)

   - Delay PCIE_T_PVPERL_MS (100 ms), not just PCIE_T_PERST_CLK_US (100
     us), before deasserting PERST# to ensure power and refclk are
     stable (Siddharth Vadapalli)

  TI Keystone PCIe controller driver:

   - Set the 'ti,keystone-pcie' mode so v3.65a devices work in Root
     Complex mode (Kishon Vijay Abraham I)

   - Try to avoid unrecoverable SError for attempts to issue config
     transactions when the link is down; this is racy but the best we
     can do (Kishon Vijay Abraham I)

  Miscellaneous:

   - Reorganize kerneldoc parameter names to match order in function
     signature (Julia Lawall)

   - Fix sysfs reset_method_store() memory leak (Todd Kjos)

   - Simplify pci_create_slot() (Ilpo Järvinen)

   - Fix incorrect printf format specifiers in pcitest (Luo Yifan)"

* tag 'pci-v6.13-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci: (127 commits)
  PCI: rockchip-ep: Handle PERST# signal in EP mode
  PCI: rockchip-ep: Improve link training
  PCI: rockship-ep: Implement the pci_epc_ops::stop_link() operation
  PCI: rockchip-ep: Refactor endpoint link training enable
  PCI: rockchip-ep: Refactor rockchip_pcie_ep_probe() MSI-X hiding
  PCI: rockchip-ep: Refactor rockchip_pcie_ep_probe() memory allocations
  PCI: rockchip-ep: Rename rockchip_pcie_parse_ep_dt()
  PCI: rockchip-ep: Fix MSI IRQ data mapping
  PCI: rockchip-ep: Implement the pci_epc_ops::align_addr() operation
  PCI: rockchip-ep: Improve rockchip_pcie_ep_map_addr()
  PCI: rockchip-ep: Improve rockchip_pcie_ep_unmap_addr()
  PCI: rockchip-ep: Use a macro to define EP controller .align feature
  PCI: rockchip-ep: Fix address translation unit programming
  PCI/pwrctrl: Rename pwrctrl functions and structures
  PCI/pwrctrl: Rename pwrctl files to pwrctrl
  PCI/pwrctl: Remove pwrctl device without iterating over all children of pwrctl parent
  PCI/pwrctl: Ensure that pwrctl drivers are probed before PCI client drivers
  PCI/pwrctl: Create pwrctl device only if at least one power supply is present
  PCI/pwrctl: Use of_platform_device_create() to create pwrctl devices
  tools: PCI: Fix incorrect printf format specifiers
  ...
commit 1746db26f8 by Linus Torvalds, 2024-11-26 18:05:44 -08:00
134 changed files with 4181 additions and 1172 deletions


@ -163,6 +163,17 @@ Description:
will be present in sysfs. Writing 1 to this file
will perform reset.
What: /sys/bus/pci/devices/.../reset_subordinate
Date: October 2024
Contact: linux-pci@vger.kernel.org
Description:
This is visible only for bridge devices. If you want to reset
all devices attached through the subordinate bus of a specific
bridge device, writing 1 to this will try to do it. This will
affect all devices attached to the system through this bridge
similar to writing 1 to their individual "reset" file, so use
with caution.
What: /sys/bus/pci/devices/.../vpd
Date: February 2008
Contact: Ben Hutchings <bwh@kernel.org>


@ -117,6 +117,35 @@ by the PCI endpoint function driver.
The PCI endpoint function driver should use pci_epc_mem_free_addr() to
free the memory space allocated using pci_epc_mem_alloc_addr().
* pci_epc_map_addr()
A PCI endpoint function driver should use pci_epc_map_addr() to map the
CPU address of local memory obtained with pci_epc_mem_alloc_addr() to an
RC PCI address.
* pci_epc_unmap_addr()
A PCI endpoint function driver should use pci_epc_unmap_addr() to unmap
the CPU address of local memory mapped to an RC address with
pci_epc_map_addr().
* pci_epc_mem_map()
A PCI endpoint controller may impose constraints on the RC PCI addresses that
can be mapped. The function pci_epc_mem_map() allows endpoint function
drivers to allocate and map controller memory while handling such
constraints. This function will determine the size of the memory that must be
allocated with pci_epc_mem_alloc_addr() for successfully mapping an RC PCI
address range. This function will also indicate the size of the PCI address
range that was actually mapped, which can be less than the requested size, as
well as the offset into the allocated memory to use for accessing the mapped
RC PCI address range.
* pci_epc_mem_unmap()
A PCI endpoint function driver can use pci_epc_mem_unmap() to unmap and free
controller memory that was allocated and mapped using pci_epc_mem_map().
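
For illustration, the two calls pair up roughly as follows (a sketch; the
epc, func_no, vfunc_no, pci_addr, size and buf variables are assumed to
come from the endpoint function driver)::

   struct pci_epc_map map;
   int ret;

   ret = pci_epc_mem_map(epc, func_no, vfunc_no, pci_addr, size, &map);
   if (ret)
           return ret;

   /* map.pci_size may be less than 'size'; access via map.virt_addr */
   memcpy_toio(map.virt_addr, buf, map.pci_size);

   pci_epc_mem_unmap(epc, func_no, vfunc_no, &map);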
Other EPC APIs
~~~~~~~~~~~~~~


@ -18,3 +18,4 @@ PCI Bus Subsystem
pcieaer-howto
endpoint/index
boot-interrupts
tph


@ -217,8 +217,12 @@ capability structure except the PCI Express capability structure,
that is shared between many drivers including the service drivers.
RMW Capability accessors (pcie_capability_clear_and_set_word(),
pcie_capability_set_word(), and pcie_capability_clear_word()) protect
a selected set of PCI Express Capability Registers (Link Control
Register and Root Control Register). Any change to those registers
should be performed using RMW accessors to avoid problems due to
concurrent updates. For the up-to-date list of protected registers,
see pcie_capability_clear_and_set_word().
a selected set of PCI Express Capability Registers:
* Link Control Register
* Root Control Register
* Link Control 2 Register
Any change to those registers should be performed using RMW accessors to
avoid problems due to concurrent updates. For the up-to-date list of
protected registers, see pcie_capability_clear_and_set_word().
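
For example, a sketch of updating the Target Link Speed field of Link
Control 2 through the RMW accessor, so that concurrent updates to other
fields of the register are not lost::

   pcie_capability_clear_and_set_word(pdev, PCI_EXP_LNKCTL2,
                                      PCI_EXP_LNKCTL2_TLS,
                                      PCI_EXP_LNKCTL2_TLS_8_0GT);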

Documentation/PCI/tph.rst (new file, 132 lines)

@ -0,0 +1,132 @@
.. SPDX-License-Identifier: GPL-2.0
===========
TPH Support
===========
:Copyright: 2024 Advanced Micro Devices, Inc.
:Authors: - Eric van Tassell <eric.vantassell@amd.com>
- Wei Huang <wei.huang2@amd.com>
Overview
========
TPH (TLP Processing Hints) is a PCIe feature that allows endpoint devices
to provide optimization hints for requests that target memory space.
These hints, in a format called Steering Tags (STs), are embedded in the
requester's TLP headers, enabling the system hardware, such as the Root
Complex, to better manage platform resources for these requests.
For example, on platforms with TPH-based direct data cache injection
support, an endpoint device can include appropriate STs in its DMA
traffic to specify which cache the data should be written to. This allows
the CPU core to have a higher probability of getting data from cache,
potentially improving performance and reducing latency in data
processing.
How to Use TPH
==============
TPH is presented as an optional extended capability in PCIe. The Linux
kernel handles TPH discovery during boot, but it is up to the device
driver to request TPH enablement if it is to be utilized. Once enabled,
the driver uses the provided API to obtain the Steering Tag for the
target memory and to program the ST into the device's ST table.
Enable TPH support in Linux
---------------------------
To support TPH, the kernel must be built with the CONFIG_PCIE_TPH option
enabled.
Manage TPH
----------
To enable TPH for a device, use the following function::
int pcie_enable_tph(struct pci_dev *pdev, int mode);
This function enables TPH support for a device with a specific ST mode.
Currently supported modes include:
* PCI_TPH_ST_NS_MODE - No ST Mode
* PCI_TPH_ST_IV_MODE - Interrupt Vector Mode
* PCI_TPH_ST_DS_MODE - Device Specific Mode
`pcie_enable_tph()` checks whether the requested mode is actually
supported by the device before enabling it. Based on the return value of
`pcie_enable_tph()`, the device driver can determine which TPH mode is
supported and can be properly enabled.
To disable TPH, use the following function::
void pcie_disable_tph(struct pci_dev *pdev);
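
For example, a driver that prefers Interrupt Vector Mode could fall back
to No ST Mode when that is all the device supports (a sketch; failure
handling is driver specific)::

   if (pcie_enable_tph(pdev, PCI_TPH_ST_IV_MODE) &&
       pcie_enable_tph(pdev, PCI_TPH_ST_NS_MODE))
           return;   /* TPH is not usable on this device */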
Manage ST
---------
Steering Tags are platform specific. The PCIe spec does not specify where
STs come from. Instead, the PCI Firmware Specification defines an ACPI
_DSM method (see the `Revised _DSM for Cache Locality TPH Features ECN
<https://members.pcisig.com/wg/PCI-SIG/document/15470>`_) for retrieving
STs for a target memory with various properties. This method is what is
supported by this implementation.
To retrieve a Steering Tag for a target memory associated with a specific
CPU, use the following function::
int pcie_tph_get_cpu_st(struct pci_dev *pdev, enum tph_mem_type type,
unsigned int cpu_uid, u16 *tag);
The `type` argument specifies the memory type, either volatile or
persistent, of the target memory. The `cpu_uid` argument specifies the
CPU with which the memory is associated.
After the ST value is retrieved, the device driver can use the following
function to write the ST into the device::
int pcie_tph_set_st_entry(struct pci_dev *pdev, unsigned int index,
u16 tag);
The `index` argument is the index of the ST table entry that the Steering
Tag will be written into. `pcie_tph_set_st_entry()` will figure out the
proper location of the ST table, either in the MSI-X table or in the TPH
Extended Capability space, and write the Steering Tag into the entry
pointed to by the `index` argument.
It is completely up to the driver to decide how to use these TPH
functions. For example, a network device driver can use the TPH APIs
above to update the Steering Tag when the interrupt affinity of an RX/TX
queue changes. Here is sample code for an IRQ affinity notifier:
.. code-block:: c

   static void irq_affinity_notified(struct irq_affinity_notify *notify,
                                     const cpumask_t *mask)
   {
           struct drv_irq *irq;
           unsigned int cpu_id;
           u16 tag;

           irq = container_of(notify, struct drv_irq, affinity_notify);
           cpumask_copy(irq->cpu_mask, mask);

           /* Pick a right CPU as the target - here is just an example */
           cpu_id = cpumask_first(irq->cpu_mask);

           if (pcie_tph_get_cpu_st(irq->pdev, TPH_MEM_TYPE_VM, cpu_id,
                                   &tag))
                   return;

           if (pcie_tph_set_st_entry(irq->pdev, irq->msix_nr, tag))
                   return;
   }
Disable TPH system-wide
-----------------------
There is a kernel command line option available to control the TPH feature:
* "notph": TPH will be disabled for all endpoint devices.


@ -4686,6 +4686,10 @@
nomio [S390] Do not use MIO instructions.
norid [S390] ignore the RID field and force use of
one PCI domain per PCI function
notph [PCIE] If the PCIE_TPH kernel config parameter
is enabled, this kernel boot option can be used
to disable PCIe TLP Processing Hints support
system-wide.
pcie_aspm= [PCIE] Forcibly enable or ignore PCIe Active State Power
Management.


@ -149,7 +149,7 @@ allOf:
then:
properties:
clocks:
minItems: 4
minItems: 6
clock-names:
items:
@ -178,7 +178,7 @@ allOf:
then:
properties:
clocks:
minItems: 4
minItems: 6
clock-names:
items:
@ -207,6 +207,7 @@ allOf:
properties:
clocks:
minItems: 4
maxItems: 4
clock-names:
items:


@ -17,6 +17,12 @@ properties:
compatible:
const: microchip,pcie-host-1.0 # PolarFire
reg:
minItems: 3
reg-names:
minItems: 3
clocks:
description:
Fabric Interface Controllers, FICs, are the interface between the FPGA
@ -62,8 +68,9 @@ examples:
pcie0: pcie@2030000000 {
compatible = "microchip,pcie-host-1.0";
reg = <0x0 0x70000000 0x0 0x08000000>,
<0x0 0x43000000 0x0 0x00010000>;
reg-names = "cfg", "apb";
<0x0 0x43008000 0x0 0x00002000>,
<0x0 0x4300a000 0x0 0x00002000>;
reg-names = "cfg", "bridge", "ctrl";
device_type = "pci";
#address-cells = <3>;
#size-cells = <2>;


@ -18,12 +18,18 @@ allOf:
properties:
reg:
maxItems: 2
maxItems: 3
minItems: 2
reg-names:
items:
- const: cfg
- const: apb
oneOf:
- items:
- const: cfg
- const: apb
- items:
- const: cfg
- const: bridge
- const: ctrl
interrupts:
minItems: 1


@ -81,6 +81,10 @@ properties:
vddpe-3v3-supply:
description: PCIe endpoint power supply
operating-points-v2: true
opp-table:
type: object
required:
- reg
- reg-names


@ -70,10 +70,6 @@ properties:
- const: msi7
- const: global
operating-points-v2: true
opp-table:
type: object
resets:
maxItems: 1


@ -20,6 +20,7 @@ properties:
- const: qcom,pcie-sm8550
- items:
- enum:
- qcom,sar2130p-pcie
- qcom,pcie-sm8650
- const: qcom,pcie-sm8550
@ -39,7 +40,7 @@ properties:
clocks:
minItems: 7
maxItems: 8
maxItems: 9
clock-names:
minItems: 7
@ -52,6 +53,7 @@ properties:
- const: ddrss_sf_tbu # PCIe SF TBU clock
- const: noc_aggr # Aggre NoC PCIe AXI clock
- const: cnoc_sf_axi # Config NoC PCIe1 AXI clock
- const: qmip_pcie_ahb # QMIP PCIe AHB clock
interrupts:
minItems: 8


@ -47,9 +47,10 @@ properties:
interrupts:
minItems: 8
maxItems: 8
maxItems: 9
interrupt-names:
minItems: 8
items:
- const: msi0
- const: msi1
@ -59,6 +60,7 @@ properties:
- const: msi5
- const: msi6
- const: msi7
- const: global
resets:
minItems: 1
@ -130,9 +132,10 @@ examples:
<GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
<GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "msi0", "msi1", "msi2", "msi3",
"msi4", "msi5", "msi6", "msi7";
"msi4", "msi5", "msi6", "msi7", "global";
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 0x7>;
interrupt-map = <0 0 0 1 &intc 0 0 0 149 IRQ_TYPE_LEVEL_HIGH>, /* int_a */


@ -26,6 +26,7 @@ properties:
- qcom,pcie-ipq8064-v2
- qcom,pcie-ipq8074
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
- qcom,pcie-msm8996
- qcom,pcie-qcs404
- qcom,pcie-sdm845
@ -164,6 +165,7 @@ allOf:
enum:
- qcom,pcie-ipq6018
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
then:
properties:
reg:
@ -400,6 +402,53 @@ allOf:
- const: axi_m_sticky # AXI Master Sticky reset
- const: axi_s_sticky # AXI Slave Sticky reset
- if:
properties:
compatible:
contains:
enum:
- qcom,pcie-ipq9574
then:
properties:
clocks:
minItems: 6
maxItems: 6
clock-names:
items:
- const: axi_m # AXI Master clock
- const: axi_s # AXI Slave clock
- const: axi_bridge
- const: rchng
- const: ahb
- const: aux
resets:
minItems: 8
maxItems: 8
reset-names:
items:
- const: pipe # PIPE reset
- const: sticky # Core Sticky reset
- const: axi_s_sticky # AXI Slave Sticky reset
- const: axi_s # AXI Slave reset
- const: axi_m_sticky # AXI Master Sticky reset
- const: axi_m # AXI Master reset
- const: aux # AUX Reset
- const: ahb # AHB Reset
interrupts:
minItems: 8
interrupt-names:
items:
- const: msi0
- const: msi1
- const: msi2
- const: msi3
- const: msi4
- const: msi5
- const: msi6
- const: msi7
- if:
properties:
compatible:
@ -510,6 +559,7 @@ allOf:
- qcom,pcie-ipq8064v2
- qcom,pcie-ipq8074
- qcom,pcie-ipq8074-gen3
- qcom,pcie-ipq9574
- qcom,pcie-qcs404
then:
required:


@ -230,7 +230,6 @@ examples:
interrupts = <25>, <24>;
interrupt-names = "msi", "hp";
#interrupt-cells = <1>;
reset-gpios = <&port0 0 1>;


@ -16,6 +16,13 @@ properties:
compatible:
const: starfive,jh7110-pcie
reg:
maxItems: 2
reg-names:
maxItems: 2
clocks:
items:
- description: NOC bus clock


@ -394,7 +394,6 @@ PCI
pcim_enable_device() : after success, some PCI ops become managed
pcim_iomap() : do iomap() on a single BAR
pcim_iomap_regions() : do request_region() and iomap() on multiple BARs
pcim_iomap_regions_request_all() : do request_region() on all and iomap() on multiple BARs
pcim_iomap_table() : array of mapped addresses indexed by BAR
pcim_iounmap() : do iounmap() on a single BAR
pcim_iounmap_regions() : do iounmap() and release_region() on multiple BARs


@ -46,6 +46,9 @@ PCI Support Library
.. kernel-doc:: drivers/pci/pci-sysfs.c
:internal:
.. kernel-doc:: drivers/pci/tph.c
:export:
PCI Hotplug Support Library
---------------------------


@ -13927,6 +13927,12 @@ R: schalla@marvell.com
R: vattunuru@marvell.com
F: drivers/vdpa/octeon_ep/
MARVELL OCTEON HOTPLUG DRIVER
R: Shijith Thotton <sthotton@marvell.com>
R: Vamsi Attunuru <vattunuru@marvell.com>
S: Supported
F: drivers/pci/hotplug/octep_hp.c
MATROX FRAMEBUFFER DRIVER
L: linux-fbdev@vger.kernel.org
S: Orphan
@ -17994,8 +18000,8 @@ M: Bartosz Golaszewski <brgl@bgdev.pl>
L: linux-pci@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
F: drivers/pci/pwrctl/*
F: include/linux/pci-pwrctl.h
F: drivers/pci/pwrctrl/*
F: include/linux/pci-pwrctrl.h
PCI SUBSYSTEM
M: Bjorn Helgaas <bhelgaas@google.com>
@ -18017,6 +18023,15 @@ F: include/linux/of_pci.h
F: include/linux/pci*
F: include/uapi/linux/pci*
PCIE BANDWIDTH CONTROLLER
M: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
L: linux-pci@vger.kernel.org
S: Supported
F: drivers/pci/pcie/bwctrl.c
F: drivers/thermal/pcie_cooling.c
F: include/linux/pci-bwctrl.h
F: tools/testing/selftests/pcie_bwctrl/
PCIE DRIVER FOR AMAZON ANNAPURNA LABS
M: Jonathan Chocron <jonnyc@amazon.com>
L: linux-pci@vger.kernel.org


@ -53,7 +53,7 @@ static int zpci_bus_prepare_device(struct zpci_dev *zdev)
zpci_setup_bus_resources(zdev);
for (i = 0; i < PCI_STD_NUM_BARS; i++) {
if (zdev->bars[i].res)
pci_bus_add_resource(zdev->zbus->bus, zdev->bars[i].res, 0);
pci_bus_add_resource(zdev->zbus->bus, zdev->bars[i].res);
}
}


@ -250,6 +250,125 @@ void __init pci_acpi_crs_quirks(void)
pr_info("Please notify linux-pci@vger.kernel.org so future kernels can do this automatically\n");
}
/*
* Check if pdev is part of a PCIe switch that is directly below the
* specified bridge.
*/
static bool pcie_switch_directly_under(struct pci_dev *bridge,
struct pci_dev *pdev)
{
struct pci_dev *parent = pci_upstream_bridge(pdev);
/* If the device doesn't have a parent, it's not under anything */
if (!parent)
return false;
/*
* If the device has a PCIe type, check if it is below the
* corresponding PCIe switch components (if applicable). Then check
* if its upstream port is directly beneath the specified bridge.
*/
switch (pci_pcie_type(pdev)) {
case PCI_EXP_TYPE_UPSTREAM:
return parent == bridge;
case PCI_EXP_TYPE_DOWNSTREAM:
if (pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
return false;
parent = pci_upstream_bridge(parent);
return parent == bridge;
case PCI_EXP_TYPE_ENDPOINT:
if (pci_pcie_type(parent) != PCI_EXP_TYPE_DOWNSTREAM)
return false;
parent = pci_upstream_bridge(parent);
if (!parent || pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
return false;
parent = pci_upstream_bridge(parent);
return parent == bridge;
}
return false;
}
static bool pcie_has_usb4_host_interface(struct pci_dev *pdev)
{
struct fwnode_handle *fwnode;
/*
* For USB4, the tunneled PCIe Root or Downstream Ports are marked
* with the "usb4-host-interface" ACPI property, so we look for
* that first. This should cover most cases.
*/
fwnode = fwnode_find_reference(dev_fwnode(&pdev->dev),
"usb4-host-interface", 0);
if (!IS_ERR(fwnode)) {
fwnode_handle_put(fwnode);
return true;
}
/*
* Any integrated Thunderbolt 3/4 PCIe Root Ports from Intel
* before Alder Lake do not have the "usb4-host-interface"
* property so we use their PCI IDs instead. All these are
* tunneled. This list is not expected to grow.
*/
if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
switch (pdev->device) {
/* Ice Lake Thunderbolt 3 PCIe Root Ports */
case 0x8a1d:
case 0x8a1f:
case 0x8a21:
case 0x8a23:
/* Tiger Lake-LP Thunderbolt 4 PCIe Root Ports */
case 0x9a23:
case 0x9a25:
case 0x9a27:
case 0x9a29:
/* Tiger Lake-H Thunderbolt 4 PCIe Root Ports */
case 0x9a2b:
case 0x9a2d:
case 0x9a2f:
case 0x9a31:
return true;
}
}
return false;
}
bool arch_pci_dev_is_removable(struct pci_dev *pdev)
{
struct pci_dev *parent, *root;
/* pdev without a parent or Root Port is never tunneled */
parent = pci_upstream_bridge(pdev);
if (!parent)
return false;
root = pcie_find_root_port(pdev);
if (!root)
return false;
/* Internal PCIe devices are not tunneled */
if (!root->external_facing)
return false;
/* Anything directly behind a "usb4-host-interface" is tunneled */
if (pcie_has_usb4_host_interface(parent))
return true;
/*
* Check if this is a discrete Thunderbolt/USB4 controller that is
* directly behind the non-USB4 PCIe Root Port marked as
* "ExternalFacingPort". Those are not behind a PCIe tunnel.
*/
if (pcie_switch_directly_under(root, pdev))
return false;
/* PCIe devices after the discrete chip are tunneled */
return true;
}
#ifdef CONFIG_PCI_MMCONFIG
static int check_segment(u16 seg, struct device *dev, char *estr)
{


@ -757,7 +757,7 @@ static void pci_amd_enable_64bit_bar(struct pci_dev *dev)
dev_info(&dev->dev, "adding root bus resource %pR (tainting kernel)\n",
res);
add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
pci_bus_add_resource(dev->bus, res, 0);
pci_bus_add_resource(dev->bus, res);
}
base = ((res->start >> 8) & AMD_141b_MMIO_BASE_MMIOBASE_MASK) |


@ -370,7 +370,7 @@ static int acard_ahci_init_one(struct pci_dev *pdev, const struct pci_device_id
/* AHCI controllers often implement SFF compatible interface.
* Grab all PCI BARs just in case.
*/
rc = pcim_iomap_regions_request_all(pdev, 1 << AHCI_PCI_BAR, DRV_NAME);
rc = pcim_request_all_regions(pdev, DRV_NAME);
if (rc == -EBUSY)
pcim_pin_device(pdev);
if (rc)
@ -386,7 +386,9 @@ static int acard_ahci_init_one(struct pci_dev *pdev, const struct pci_device_id
if (!(hpriv->flags & AHCI_HFLAG_NO_MSI))
pci_enable_msi(pdev);
hpriv->mmio = pcim_iomap_table(pdev)[AHCI_PCI_BAR];
hpriv->mmio = pcim_iomap(pdev, AHCI_PCI_BAR, 0);
if (!hpriv->mmio)
return -ENOMEM;
/* save initial config */
ahci_save_initial_config(&pdev->dev, hpriv);


@ -1869,7 +1869,7 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* AHCI controllers often implement SFF compatible interface.
* Grab all PCI BARs just in case.
*/
rc = pcim_iomap_regions_request_all(pdev, 1 << ahci_pci_bar, DRV_NAME);
rc = pcim_request_all_regions(pdev, DRV_NAME);
if (rc == -EBUSY)
pcim_pin_device(pdev);
if (rc)
@ -1893,7 +1893,9 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ahci_sb600_enable_64bit(pdev))
hpriv->flags &= ~AHCI_HFLAG_32BIT_ONLY;
hpriv->mmio = pcim_iomap_table(pdev)[ahci_pci_bar];
hpriv->mmio = pcim_iomap(pdev, ahci_pci_bar, 0);
if (!hpriv->mmio)
return -ENOMEM;
/* detect remapped nvme devices */
ahci_remap_check(pdev, ahci_pci_bar, hpriv);


@ -129,16 +129,21 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Find and map all the device's BARS */
bar_mask = pci_select_bars(pdev, IORESOURCE_MEM) & ADF_GEN4_BAR_MASK;
ret = pcim_iomap_regions_request_all(pdev, bar_mask, pci_name(pdev));
ret = pcim_request_all_regions(pdev, pci_name(pdev));
if (ret) {
dev_err(&pdev->dev, "Failed to map pci regions.\n");
dev_err(&pdev->dev, "Failed to request PCI regions.\n");
goto out_err;
}
i = 0;
for_each_set_bit(bar_nr, &bar_mask, PCI_STD_NUM_BARS) {
bar = &accel_pci_dev->pci_bars[i++];
bar->virt_addr = pcim_iomap_table(pdev)[bar_nr];
bar->virt_addr = pcim_iomap(pdev, bar_nr, 0);
if (!bar->virt_addr) {
dev_err(&pdev->dev, "Failed to ioremap PCI region.\n");
ret = -ENOMEM;
goto out_err;
}
}
pci_set_master(pdev);


@ -131,16 +131,21 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Find and map all the device's BARS */
bar_mask = pci_select_bars(pdev, IORESOURCE_MEM) & ADF_GEN4_BAR_MASK;
ret = pcim_iomap_regions_request_all(pdev, bar_mask, pci_name(pdev));
ret = pcim_request_all_regions(pdev, pci_name(pdev));
if (ret) {
dev_err(&pdev->dev, "Failed to map pci regions.\n");
dev_err(&pdev->dev, "Failed to request PCI regions.\n");
goto out_err;
}
i = 0;
for_each_set_bit(bar_nr, &bar_mask, PCI_STD_NUM_BARS) {
bar = &accel_pci_dev->pci_bars[i++];
bar->virt_addr = pcim_iomap_table(pdev)[bar_nr];
bar->virt_addr = pcim_iomap(pdev, bar_nr, 0);
if (!bar->virt_addr) {
dev_err(&pdev->dev, "Failed to ioremap PCI region.\n");
ret = -ENOMEM;
goto out_err;
}
}
pci_set_master(pdev);


@ -739,18 +739,22 @@ static int otx2_cptpf_probe(struct pci_dev *pdev,
dev_err(dev, "Unable to get usable DMA configuration\n");
goto clear_drvdata;
}
/* Map PF's configuration registers */
err = pcim_iomap_regions_request_all(pdev, 1 << PCI_PF_REG_BAR_NUM,
OTX2_CPT_DRV_NAME);
err = pcim_request_all_regions(pdev, OTX2_CPT_DRV_NAME);
if (err) {
dev_err(dev, "Couldn't get PCI resources 0x%x\n", err);
dev_err(dev, "Couldn't request PCI resources 0x%x\n", err);
goto clear_drvdata;
}
pci_set_master(pdev);
pci_set_drvdata(pdev, cptpf);
cptpf->pdev = pdev;
cptpf->reg_base = pcim_iomap_table(pdev)[PCI_PF_REG_BAR_NUM];
/* Map PF's configuration registers */
cptpf->reg_base = pcim_iomap(pdev, PCI_PF_REG_BAR_NUM, 0);
if (!cptpf->reg_base) {
err = -ENOMEM;
dev_err(dev, "Couldn't ioremap PCI resource 0x%x\n", err);
goto clear_drvdata;
}
/* Check if AF driver is up, otherwise defer probe */
err = cpt_is_pf_usable(cptpf);


@ -358,9 +358,8 @@ static int otx2_cptvf_probe(struct pci_dev *pdev,
dev_err(dev, "Unable to get usable DMA configuration\n");
goto clear_drvdata;
}
/* Map VF's configuration registers */
ret = pcim_iomap_regions_request_all(pdev, 1 << PCI_PF_REG_BAR_NUM,
OTX2_CPTVF_DRV_NAME);
ret = pcim_request_all_regions(pdev, OTX2_CPTVF_DRV_NAME);
if (ret) {
dev_err(dev, "Couldn't get PCI resources 0x%x\n", ret);
goto clear_drvdata;
@ -369,7 +368,13 @@ static int otx2_cptvf_probe(struct pci_dev *pdev,
pci_set_drvdata(pdev, cptvf);
cptvf->pdev = pdev;
cptvf->reg_base = pcim_iomap_table(pdev)[PCI_PF_REG_BAR_NUM];
/* Map VF's configuration registers */
cptvf->reg_base = pcim_iomap(pdev, PCI_PF_REG_BAR_NUM, 0);
if (!cptvf->reg_base) {
ret = -ENOMEM;
dev_err(dev, "Couldn't ioremap PCI resource 0x%x\n", ret);
goto clear_drvdata;
}
otx2_cpt_set_hw_caps(pdev, &cptvf->cap_flag);


@ -39,14 +39,6 @@ struct cci_drvdata {
struct dfl_fpga_cdev *cdev; /* container device */
};
static void __iomem *cci_pci_ioremap_bar0(struct pci_dev *pcidev)
{
if (pcim_iomap_regions(pcidev, BIT(0), DRV_NAME))
return NULL;
return pcim_iomap_table(pcidev)[0];
}
static int cci_pci_alloc_irq(struct pci_dev *pcidev)
{
int ret, nvec = pci_msix_vec_count(pcidev);
@ -235,9 +227,9 @@ static int find_dfls_by_default(struct pci_dev *pcidev,
u64 v;
/* start to find Device Feature List from Bar 0 */
base = cci_pci_ioremap_bar0(pcidev);
if (!base)
return -ENOMEM;
base = pcim_iomap_region(pcidev, 0, DRV_NAME);
if (IS_ERR(base))
return PTR_ERR(base);
/*
* PF device has FME and Ports/AFUs, and VF device only has one
@ -296,7 +288,7 @@ static int find_dfls_by_default(struct pci_dev *pcidev,
}
/* release I/O mappings for next step enumeration */
pcim_iounmap_regions(pcidev, BIT(0));
pcim_iounmap_region(pcidev, 0);
return ret;
}


@ -78,24 +78,25 @@ static int mrfld_gpio_probe(struct pci_dev *pdev, const struct pci_device_id *id
if (retval)
return retval;
retval = pcim_iomap_regions(pdev, BIT(1) | BIT(0), pci_name(pdev));
if (retval)
return dev_err_probe(dev, retval, "I/O memory mapping error\n");
base = pcim_iomap_table(pdev)[1];
base = pcim_iomap_region(pdev, 1, pci_name(pdev));
if (IS_ERR(base))
return dev_err_probe(dev, PTR_ERR(base), "I/O memory mapping error\n");
irq_base = readl(base + 0 * sizeof(u32));
gpio_base = readl(base + 1 * sizeof(u32));
/* Release the IO mapping, since we already get the info from BAR1 */
pcim_iounmap_regions(pdev, BIT(1));
pcim_iounmap_region(pdev, 1);
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
priv->dev = dev;
priv->reg_base = pcim_iomap_table(pdev)[0];
priv->reg_base = pcim_iomap_region(pdev, 0, pci_name(pdev));
if (IS_ERR(priv->reg_base))
return dev_err_probe(dev, PTR_ERR(priv->reg_base),
"I/O memory mapping error\n");
priv->pin_info.pin_ranges = mrfld_gpio_ranges;
priv->pin_info.nranges = ARRAY_SIZE(mrfld_gpio_ranges);


@ -23,7 +23,6 @@ enum {
TH_PCI_RTIT_BAR = 4,
};
#define BAR_MASK (BIT(TH_PCI_CONFIG_BAR) | BIT(TH_PCI_STH_SW_BAR))
#define PCI_REG_NPKDSC 0x80
#define NPKDSC_TSACT BIT(5)
@ -83,10 +82,16 @@ static int intel_th_pci_probe(struct pci_dev *pdev,
if (err)
return err;
err = pcim_iomap_regions_request_all(pdev, BAR_MASK, DRIVER_NAME);
err = pcim_request_all_regions(pdev, DRIVER_NAME);
if (err)
return err;
if (!pcim_iomap(pdev, TH_PCI_CONFIG_BAR, 0))
return -ENOMEM;
if (!pcim_iomap(pdev, TH_PCI_STH_SW_BAR, 0))
return -ENOMEM;
if (pdev->resource[TH_PCI_RTIT_BAR].start) {
resource[TH_MMIO_RTIT] = pdev->resource[TH_PCI_RTIT_BAR];
r++;


@ -239,12 +239,11 @@ static int cavium_ptp_probe(struct pci_dev *pdev,
if (err)
goto error_free;
err = pcim_iomap_regions(pdev, 1 << PCI_PTP_BAR_NO, pci_name(pdev));
clock->reg_base = pcim_iomap_region(pdev, PCI_PTP_BAR_NO, pci_name(pdev));
err = PTR_ERR_OR_ZERO(clock->reg_base);
if (err)
goto error_free;
clock->reg_base = pcim_iomap_table(pdev)[PCI_PTP_BAR_NO];
spin_lock_init(&clock->spin_lock);
cc = &clock->cycle_counter;
@ -292,7 +291,7 @@ static int cavium_ptp_probe(struct pci_dev *pdev,
clock_cfg = readq(clock->reg_base + PTP_CLOCK_CFG);
clock_cfg &= ~PTP_CLOCK_CFG_PTP_EN;
writeq(clock_cfg, clock->reg_base + PTP_CLOCK_CFG);
pcim_iounmap_regions(pdev, 1 << PCI_PTP_BAR_NO);
pcim_iounmap_region(pdev, PCI_PTP_BAR_NO);
error_free:
devm_kfree(dev, clock);


@ -3535,7 +3535,6 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
struct iwl_trans_pcie *trans_pcie, **priv;
struct iwl_trans *trans;
int ret, addr_size;
void __iomem * const *table;
u32 bar0;
/* reassign our BAR 0 if invalid due to possible runtime PM races */
@ -3661,22 +3660,15 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
}
}
ret = pcim_iomap_regions_request_all(pdev, BIT(0), DRV_NAME);
ret = pcim_request_all_regions(pdev, DRV_NAME);
if (ret) {
dev_err(&pdev->dev, "pcim_iomap_regions_request_all failed\n");
dev_err(&pdev->dev, "Requesting all PCI BARs failed.\n");
goto out_no_pci;
}
table = pcim_iomap_table(pdev);
if (!table) {
dev_err(&pdev->dev, "pcim_iomap_table failed\n");
ret = -ENOMEM;
goto out_no_pci;
}
trans_pcie->hw_base = table[0];
trans_pcie->hw_base = pcim_iomap(pdev, 0, 0);
if (!trans_pcie->hw_base) {
dev_err(&pdev->dev, "couldn't find IO mem in first BAR\n");
dev_err(&pdev->dev, "Could not ioremap PCI BAR 0.\n");
ret = -ENODEV;
goto out_no_pci;
}


@ -2671,15 +2671,20 @@ static int idt_init_pci(struct idt_ntb_dev *ndev)
*/
pci_set_master(pdev);
/* Request all BARs resources and map BAR0 only */
ret = pcim_iomap_regions_request_all(pdev, 1, NTB_NAME);
/* Request all BARs resources */
ret = pcim_request_all_regions(pdev, NTB_NAME);
if (ret != 0) {
dev_err(&pdev->dev, "Failed to request resources\n");
goto err_clear_master;
}
/* Retrieve virtual address of BAR0 - PCI configuration space */
ndev->cfgspc = pcim_iomap_table(pdev)[0];
/* ioremap BAR0 - PCI configuration space */
ndev->cfgspc = pcim_iomap(pdev, 0, 0);
if (!ndev->cfgspc) {
dev_err(&pdev->dev, "Failed to ioremap BAR 0\n");
ret = -ENOMEM;
goto err_clear_master;
}
/* Put the IDT driver data pointer to the PCI-device private pointer */
pci_set_drvdata(pdev, ndev);


@ -173,6 +173,15 @@ config PCI_PASID
If unsure, say N.
config PCIE_TPH
bool "TLP Processing Hints"
help
This option adds support for PCIe TLP Processing Hints (TPH).
TPH allows endpoint devices to provide optimization hints, such as
desired caching behavior, for requests that target memory space.
These hints, called Steering Tags, can empower the system hardware
to optimize the utilization of platform resources.
config PCI_P2PDMA
bool "PCI peer-to-peer transfer support"
depends on ZONE_DEVICE
@ -305,6 +314,6 @@ source "drivers/pci/hotplug/Kconfig"
source "drivers/pci/controller/Kconfig"
source "drivers/pci/endpoint/Kconfig"
source "drivers/pci/switch/Kconfig"
source "drivers/pci/pwrctl/Kconfig"
source "drivers/pci/pwrctrl/Kconfig"
endif


@ -9,7 +9,7 @@ obj-$(CONFIG_PCI) += access.o bus.o probe.o host-bridge.o \
obj-$(CONFIG_PCI) += msi/
obj-$(CONFIG_PCI) += pcie/
obj-$(CONFIG_PCI) += pwrctl/
obj-$(CONFIG_PCI) += pwrctrl/
ifdef CONFIG_PCI
obj-$(CONFIG_PROC_FS) += proc.o
@ -36,6 +36,7 @@ obj-$(CONFIG_VGA_ARB) += vgaarb.o
obj-$(CONFIG_PCI_DOE) += doe.o
obj-$(CONFIG_PCI_DYNAMIC_OF_NODES) += of_property.o
obj-$(CONFIG_PCI_NPEM) += npem.o
obj-$(CONFIG_PCIE_TPH) += tph.o
# Endpoint library must be initialized before its users
obj-$(CONFIG_PCI_ENDPOINT) += endpoint/


@ -13,11 +13,24 @@
#include <linux/ioport.h>
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/proc_fs.h>
#include <linux/slab.h>
#include "pci.h"
/*
* The first PCI_BRIDGE_RESOURCE_NUM PCI bus resources (those that correspond
* to P2P or CardBus bridge windows) go in a table. Additional ones (for
* buses below host bridges or subtractive decode bridges) go in the list.
* Use pci_bus_for_each_resource() to iterate through all the resources.
*/
struct pci_bus_resource {
struct list_head list;
struct resource *res;
};
void pci_add_resource_offset(struct list_head *resources, struct resource *res,
resource_size_t offset)
{
@ -46,8 +59,7 @@ void pci_free_resource_list(struct list_head *resources)
}
EXPORT_SYMBOL(pci_free_resource_list);
void pci_bus_add_resource(struct pci_bus *bus, struct resource *res,
unsigned int flags)
void pci_bus_add_resource(struct pci_bus *bus, struct resource *res)
{
struct pci_bus_resource *bus_res;
@ -58,7 +70,6 @@ void pci_bus_add_resource(struct pci_bus *bus, struct resource *res,
}
bus_res->res = res;
bus_res->flags = flags;
list_add_tail(&bus_res->list, &bus->resources);
}
@ -320,6 +331,47 @@ void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { }
void __weak pcibios_bus_add_device(struct pci_dev *pdev) { }
/*
* Create pwrctrl devices (if required) for the PCI devices to handle the power
* state.
*/
static void pci_pwrctrl_create_devices(struct pci_dev *dev)
{
struct device_node *np = dev_of_node(&dev->dev);
struct device *parent = &dev->dev;
struct platform_device *pdev;
/*
* First ensure that we are starting from a PCI bridge and it has a
* corresponding devicetree node.
*/
if (np && pci_is_bridge(dev)) {
/*
* Now look for the child PCI device nodes and create pwrctrl
* devices for them. The pwrctrl device drivers will manage the
* power state of the devices.
*/
for_each_available_child_of_node_scoped(np, child) {
/*
* First check whether the pwrctrl device really
* needs to be created or not. This is decided
* based on at least one of the power supplies
* being defined in the devicetree node of the
* device.
*/
if (!of_pci_supply_present(child)) {
pci_dbg(dev, "skipping OF node: %s\n", child->name);
return;
}
/* Now create the pwrctrl device */
pdev = of_platform_device_create(child, NULL, parent);
if (!pdev)
pci_err(dev, "failed to create OF node: %s\n", child->name);
}
}
}
/**
* pci_bus_add_device - start driver for a single device
* @dev: device to add
@ -329,6 +381,7 @@ void __weak pcibios_bus_add_device(struct pci_dev *pdev) { }
void pci_bus_add_device(struct pci_dev *dev)
{
struct device_node *dn = dev->dev.of_node;
struct platform_device *pdev;
int retval;
/*
@ -343,20 +396,28 @@ void pci_bus_add_device(struct pci_dev *dev)
pci_proc_attach_device(dev);
pci_bridge_d3_update(dev);
pci_pwrctrl_create_devices(dev);
/*
* If the PCI device is associated with a pwrctrl device with a
* power supply, create a device link between the PCI device and
* pwrctrl device. This ensures that pwrctrl drivers are probed
* before PCI client drivers.
*/
pdev = of_find_device_by_node(dn);
if (pdev && of_pci_supply_present(dn)) {
if (!device_link_add(&dev->dev, &pdev->dev,
DL_FLAG_AUTOREMOVE_CONSUMER))
pci_err(dev, "failed to add device link to power control device %s\n",
pdev->name);
}
dev->match_driver = !dn || of_device_is_available(dn);
retval = device_attach(&dev->dev);
if (retval < 0 && retval != -EPROBE_DEFER)
pci_warn(dev, "device attach failed (%d)\n", retval);
pci_dev_assign_added(dev, true);
if (dev_of_node(&dev->dev) && pci_is_bridge(dev)) {
retval = of_platform_populate(dev_of_node(&dev->dev), NULL, NULL,
&dev->dev);
if (retval)
pci_err(dev, "failed to populate child OF nodes (%d)\n",
retval);
}
pci_dev_assign_added(dev);
}
EXPORT_SYMBOL_GPL(pci_bus_add_device);
@ -389,41 +450,23 @@ void pci_bus_add_devices(const struct pci_bus *bus)
}
EXPORT_SYMBOL(pci_bus_add_devices);
static void __pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
void *userdata, bool locked)
static int __pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *),
void *userdata)
{
struct pci_dev *dev;
struct pci_bus *bus;
struct list_head *next;
int retval;
int ret = 0;
bus = top;
if (!locked)
down_read(&pci_bus_sem);
next = top->devices.next;
for (;;) {
if (next == &bus->devices) {
/* end of this bus, go up or finish */
if (bus == top)
break;
next = bus->self->bus_list.next;
bus = bus->self->bus;
continue;
}
dev = list_entry(next, struct pci_dev, bus_list);
if (dev->subordinate) {
/* this is a pci-pci bridge, do its devices next */
next = dev->subordinate->devices.next;
bus = dev->subordinate;
} else
next = dev->bus_list.next;
retval = cb(dev, userdata);
if (retval)
list_for_each_entry(dev, &top->devices, bus_list) {
ret = cb(dev, userdata);
if (ret)
break;
if (dev->subordinate) {
ret = __pci_walk_bus(dev->subordinate, cb, userdata);
if (ret)
break;
}
}
if (!locked)
up_read(&pci_bus_sem);
return ret;
}
/**
@ -441,7 +484,9 @@ static void __pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void
*/
void pci_walk_bus(struct pci_bus *top, int (*cb)(struct pci_dev *, void *), void *userdata)
{
__pci_walk_bus(top, cb, userdata, false);
down_read(&pci_bus_sem);
__pci_walk_bus(top, cb, userdata);
up_read(&pci_bus_sem);
}
EXPORT_SYMBOL_GPL(pci_walk_bus);
@ -449,9 +494,8 @@ void pci_walk_bus_locked(struct pci_bus *top, int (*cb)(struct pci_dev *, void *
{
lockdep_assert_held(&pci_bus_sem);
__pci_walk_bus(top, cb, userdata, true);
__pci_walk_bus(top, cb, userdata);
}
EXPORT_SYMBOL_GPL(pci_walk_bus_locked);
struct pci_bus *pci_bus_get(struct pci_bus *bus)
{


@ -386,6 +386,13 @@ static const struct j721e_pcie_data j784s4_pcie_ep_data = {
.max_lanes = 4,
};
static const struct j721e_pcie_data j722s_pcie_rc_data = {
.mode = PCI_MODE_RC,
.linkdown_irq_regfield = J7200_LINK_DOWN,
.byte_access_allowed = true,
.max_lanes = 1,
};
static const struct of_device_id of_j721e_pcie_match[] = {
{
.compatible = "ti,j721e-pcie-host",
@ -419,6 +426,10 @@ static const struct of_device_id of_j721e_pcie_match[] = {
.compatible = "ti,j784s4-pcie-ep",
.data = &j784s4_pcie_ep_data,
},
{
.compatible = "ti,j722s-pcie-host",
.data = &j722s_pcie_rc_data,
},
{},
};
@ -572,15 +583,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
pcie->refclk = clk;
/*
* The "Power Sequencing and Reset Signal Timings" table of the
* PCI Express Card Electromechanical Specification, Revision
* 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
* should be deasserted after minimum of 100us once REFCLK is
* stable. The REFCLK to the connector in RC mode is selected
* while enabling the PHY. So deassert PERST# after 100 us.
* Section 2.2 of the PCI Express Card Electromechanical
* Specification (Revision 5.1) mandates that the deassertion
* of the PERST# signal should be delayed by 100 ms (TPVPERL).
* This shall ensure that the power and the reference clock
* are stable.
*/
if (gpiod) {
fsleep(PCIE_T_PERST_CLK_US);
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(gpiod, 1);
}
@ -671,15 +681,14 @@ static int j721e_pcie_resume_noirq(struct device *dev)
return ret;
/*
* The "Power Sequencing and Reset Signal Timings" table of the
* PCI Express Card Electromechanical Specification, Revision
* 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
* should be deasserted after minimum of 100us once REFCLK is
* stable. The REFCLK to the connector in RC mode is selected
* while enabling the PHY. So deassert PERST# after 100 us.
* Section 2.2 of the PCI Express Card Electromechanical
* Specification (Revision 5.1) mandates that the deassertion
* of the PERST# signal should be delayed by 100 ms (TPVPERL).
* This shall ensure that the power and the reference clock
* are stable.
*/
if (pcie->reset_gpio) {
fsleep(PCIE_T_PERST_CLK_US);
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(pcie->reset_gpio, 1);
}
@ -712,7 +721,7 @@ static DEFINE_NOIRQ_DEV_PM_OPS(j721e_pcie_pm_ops,
static struct platform_driver j721e_pcie_driver = {
.probe = j721e_pcie_probe,
.remove_new = j721e_pcie_remove,
.remove = j721e_pcie_remove,
.driver = {
.name = "j721e-pcie",
.of_match_table = of_j721e_pcie_match,


@ -197,7 +197,7 @@ int cdns_pcie_init_phy(struct device *dev, struct cdns_pcie *pcie)
phy_count = of_property_count_strings(np, "phy-names");
if (phy_count < 1) {
dev_err(dev, "no phy-names. PHY will not be initialized\n");
dev_info(dev, "no \"phy-names\" property found; PHY will not be initialized\n");
pcie->phy_count = 0;
return 0;
}
@ -260,7 +260,7 @@ static int cdns_pcie_resume_noirq(struct device *dev)
ret = cdns_pcie_enable_phy(pcie);
if (ret) {
dev_err(dev, "failed to enable phy\n");
dev_err(dev, "failed to enable PHY\n");
return ret;
}


@ -383,7 +383,7 @@ static const struct of_device_id exynos_pcie_of_match[] = {
static struct platform_driver exynos_pcie_driver = {
.probe = exynos_pcie_probe,
.remove_new = exynos_pcie_remove,
.remove = exynos_pcie_remove,
.driver = {
.name = "exynos-pcie",
.of_match_table = exynos_pcie_of_match,


@ -82,6 +82,11 @@ enum imx_pcie_variants {
#define IMX_PCIE_FLAG_HAS_SERDES BIT(6)
#define IMX_PCIE_FLAG_SUPPORT_64BIT BIT(7)
#define IMX_PCIE_FLAG_CPU_ADDR_FIXUP BIT(8)
/*
* Because of ERR005723 (PCIe does not support L2 power down) we need to
* work around suspend/resume on some devices which are affected by this erratum.
*/
#define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9)
#define imx_check_flag(pci, val) (pci->drvdata->flags & val)
@ -1237,9 +1242,19 @@ static int imx_pcie_suspend_noirq(struct device *dev)
return 0;
imx_pcie_msi_save_restore(imx_pcie, true);
imx_pcie_pm_turnoff(imx_pcie);
imx_pcie_stop_link(imx_pcie->pci);
imx_pcie_host_exit(pp);
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
/*
* The minimum for a workaround would be to set PERST# and to
* set the PCIE_TEST_PD flag. However, we can also disable the
* clock which saves some power.
*/
imx_pcie_assert_core_reset(imx_pcie);
imx_pcie->drvdata->enable_ref_clk(imx_pcie, false);
} else {
imx_pcie_pm_turnoff(imx_pcie);
imx_pcie_stop_link(imx_pcie->pci);
imx_pcie_host_exit(pp);
}
return 0;
}
@ -1253,14 +1268,32 @@ static int imx_pcie_resume_noirq(struct device *dev)
if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND))
return 0;
ret = imx_pcie_host_init(pp);
if (ret)
return ret;
imx_pcie_msi_save_restore(imx_pcie, false);
dw_pcie_setup_rc(pp);
if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
ret = imx_pcie->drvdata->enable_ref_clk(imx_pcie, true);
if (ret)
return ret;
ret = imx_pcie_deassert_core_reset(imx_pcie);
if (ret)
return ret;
/*
* Using PCIE_TEST_PD seems to disable MSI and powers down the
* root complex. This is why we have to setup the rc again and
* why we have to restore the MSI register.
*/
ret = dw_pcie_setup_rc(&imx_pcie->pci->pp);
if (ret)
return ret;
imx_pcie_msi_save_restore(imx_pcie, false);
} else {
ret = imx_pcie_host_init(pp);
if (ret)
return ret;
imx_pcie_msi_save_restore(imx_pcie, false);
dw_pcie_setup_rc(pp);
if (imx_pcie->link_is_up)
imx_pcie_start_link(imx_pcie->pci);
if (imx_pcie->link_is_up)
imx_pcie_start_link(imx_pcie->pci);
}
return 0;
}
@ -1485,7 +1518,9 @@ static const struct imx_pcie_drvdata drvdata[] = {
[IMX6Q] = {
.variant = IMX6Q,
.flags = IMX_PCIE_FLAG_IMX_PHY |
IMX_PCIE_FLAG_IMX_SPEED_CHANGE,
IMX_PCIE_FLAG_IMX_SPEED_CHANGE |
IMX_PCIE_FLAG_BROKEN_SUSPEND |
IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
.dbi_length = 0x200,
.gpr = "fsl,imx6q-iomuxc-gpr",
.clk_names = imx6q_clks,


@ -455,6 +455,17 @@ static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
u32 reg;
/*
* Checking whether the link is up here is a last line of defense
* against platforms that forward errors on the system bus as
* SError upon PCI configuration transactions issued when the link
* is down. This check is racy by definition and does not stop
* the system from triggering an SError if the link goes down
* after this check is performed.
*/
if (!dw_pcie_link_up(pci))
return NULL;
reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
CFG_FUNC(PCI_FUNC(devfn));
if (!pci_is_root_bus(bus->parent))
@ -1093,6 +1104,7 @@ static int ks_pcie_am654_set_mode(struct device *dev,
static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
.host_ops = &ks_pcie_host_ops,
.mode = DW_PCIE_RC_TYPE,
.version = DW_PCIE_VER_365A,
};
@ -1363,7 +1375,7 @@ static void ks_pcie_remove(struct platform_device *pdev)
static struct platform_driver ks_pcie_driver = {
.probe = ks_pcie_probe,
.remove_new = ks_pcie_remove,
.remove = ks_pcie_remove,
.driver = {
.name = "keystone-pcie",
.of_match_table = ks_pcie_of_match,


@ -632,7 +632,7 @@ MODULE_DEVICE_TABLE(of, bt1_pcie_of_match);
static struct platform_driver bt1_pcie_driver = {
.probe = bt1_pcie_probe,
.remove_new = bt1_pcie_remove,
.remove = bt1_pcie_remove,
.driver = {
.name = "bt1-pcie",
.of_match_table = bt1_pcie_of_match,


@ -268,6 +268,20 @@ static int dw_pcie_find_index(struct dw_pcie_ep *ep, phys_addr_t addr,
return -EINVAL;
}
static u64 dw_pcie_ep_align_addr(struct pci_epc *epc, u64 pci_addr,
size_t *pci_size, size_t *offset)
{
struct dw_pcie_ep *ep = epc_get_drvdata(epc);
struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
u64 mask = pci->region_align - 1;
size_t ofst = pci_addr & mask;
*pci_size = ALIGN(ofst + *pci_size, epc->mem->window.page_size);
*offset = ofst;
return pci_addr & ~mask;
}
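/*
 * Illustrative example (not part of the patch): with region_align and
 * window.page_size both 4 KiB, a 4-byte access at pci_addr 0x40001234
 * returns a mapped base of 0x40001000 with *offset = 0x234 and
 * *pci_size rounded up to 0x1000, so the caller maps one aligned page
 * and performs the access at base + offset.
 */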
static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t addr)
{
@ -280,6 +294,7 @@ static void dw_pcie_ep_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
if (ret < 0)
return;
ep->outbound_addr[atu_index] = 0;
dw_pcie_disable_atu(pci, PCIE_ATU_REGION_DIR_OB, atu_index);
clear_bit(atu_index, ep->ob_window_map);
}
@ -444,6 +459,7 @@ static const struct pci_epc_ops epc_ops = {
.write_header = dw_pcie_ep_write_header,
.set_bar = dw_pcie_ep_set_bar,
.clear_bar = dw_pcie_ep_clear_bar,
.align_addr = dw_pcie_ep_align_addr,
.map_addr = dw_pcie_ep_map_addr,
.unmap_addr = dw_pcie_ep_unmap_addr,
.set_msi = dw_pcie_ep_set_msi,
@ -488,7 +504,8 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
u32 msg_addr_lower, msg_addr_upper, reg;
struct dw_pcie_ep_func *ep_func;
struct pci_epc *epc = ep->epc;
unsigned int aligned_offset;
size_t map_size = sizeof(u32);
size_t offset;
u16 msg_ctrl, msg_data;
bool has_upper;
u64 msg_addr;
@ -516,14 +533,13 @@ int dw_pcie_ep_raise_msi_irq(struct dw_pcie_ep *ep, u8 func_no,
}
msg_addr = ((u64)msg_addr_upper) << 32 | msg_addr_lower;
aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
msg_addr = ALIGN_DOWN(msg_addr, epc->mem->window.page_size);
msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset);
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
epc->mem->window.page_size);
map_size);
if (ret)
return ret;
writel(msg_data | (interrupt_num - 1), ep->msi_mem + aligned_offset);
writel(msg_data | (interrupt_num - 1), ep->msi_mem + offset);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
@ -574,8 +590,9 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
struct pci_epf_msix_tbl *msix_tbl;
struct dw_pcie_ep_func *ep_func;
struct pci_epc *epc = ep->epc;
size_t map_size = sizeof(u32);
size_t offset;
u32 reg, msg_data, vec_ctrl;
unsigned int aligned_offset;
u32 tbl_offset;
u64 msg_addr;
int ret;
@ -600,14 +617,13 @@ int dw_pcie_ep_raise_msix_irq(struct dw_pcie_ep *ep, u8 func_no,
return -EPERM;
}
aligned_offset = msg_addr & (epc->mem->window.page_size - 1);
msg_addr = ALIGN_DOWN(msg_addr, epc->mem->window.page_size);
msg_addr = dw_pcie_ep_align_addr(epc, msg_addr, &map_size, &offset);
ret = dw_pcie_ep_map_addr(epc, func_no, 0, ep->msi_mem_phys, msg_addr,
epc->mem->window.page_size);
map_size);
if (ret)
return ret;
writel(msg_data, ep->msi_mem + aligned_offset);
writel(msg_data, ep->msi_mem + offset);
dw_pcie_ep_unmap_addr(epc, func_no, 0, ep->msi_mem_phys);
@ -689,7 +705,7 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
* for 1 MB BAR size only.
*/
for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
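/*
 * In the Resizable BAR capability the supported-sizes field starts at
 * bit 4, each bit n advertising a 2^(n - 4) MiB size; BIT(4) therefore
 * advertises exactly 1 MiB, whereas the previous 0x0 advertised no
 * supported size at all.
 */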
}
dw_pcie_setup(pci);


@ -474,8 +474,8 @@ int dw_pcie_host_init(struct dw_pcie_rp *pp)
if (pci_msi_enabled()) {
pp->has_msi_ctrl = !(pp->ops->msi_init ||
of_property_read_bool(np, "msi-parent") ||
of_property_read_bool(np, "msi-map"));
of_property_present(np, "msi-parent") ||
of_property_present(np, "msi-map"));
/*
* For the has_msi_ctrl case the default assignment is handled


@ -439,7 +439,7 @@ MODULE_DEVICE_TABLE(of, histb_pcie_of_match);
static struct platform_driver histb_pcie_platform_driver = {
.probe = histb_pcie_probe,
.remove_new = histb_pcie_remove,
.remove = histb_pcie_remove,
.driver = {
.name = "histb-pcie",
.of_match_table = histb_pcie_of_match,


@ -443,7 +443,7 @@ static const struct of_device_id of_intel_pcie_match[] = {
static struct platform_driver intel_pcie_driver = {
.probe = intel_pcie_probe,
.remove_new = intel_pcie_remove,
.remove = intel_pcie_remove,
.driver = {
.name = "intel-gw-pcie",
.of_match_table = of_intel_pcie_match,


@ -769,7 +769,7 @@ static int kirin_pcie_probe(struct platform_device *pdev)
static struct platform_driver kirin_pcie_driver = {
.probe = kirin_pcie_probe,
.remove_new = kirin_pcie_remove,
.remove = kirin_pcie_remove,
.driver = {
.name = "kirin-pcie",
.of_match_table = kirin_pcie_match,


@ -396,6 +396,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
return ret;
}
/* Perform cleanup that requires refclk */
pci_epc_deinit_notify(pci->ep.epc);
dw_pcie_ep_cleanup(&pci->ep);
/* Assert WAKE# to RC to indicate device is ready */
gpiod_set_value_cansleep(pcie_ep->wake, 1);
usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
@ -540,8 +544,6 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
{
struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
pci_epc_deinit_notify(pci->ep.epc);
dw_pcie_ep_cleanup(&pci->ep);
qcom_pcie_disable_resources(pcie_ep);
pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
}
@ -937,7 +939,7 @@ MODULE_DEVICE_TABLE(of, qcom_pcie_ep_match);
static struct platform_driver qcom_pcie_ep_driver = {
.probe = qcom_pcie_ep_probe,
.remove_new = qcom_pcie_ep_remove,
.remove = qcom_pcie_ep_remove,
.driver = {
.name = "qcom-pcie-ep",
.of_match_table = qcom_pcie_ep_match,


@ -133,6 +133,7 @@
/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_UP BIT(13)
#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
@ -1364,6 +1365,16 @@ static const struct qcom_pcie_ops ops_1_9_0 = {
.config_sid = qcom_pcie_config_sid_1_9_0,
};
/* Qcom IP rev.: 1.21.0 Synopsys IP rev.: 5.60a */
static const struct qcom_pcie_ops ops_1_21_0 = {
.get_resources = qcom_pcie_get_resources_2_7_0,
.init = qcom_pcie_init_2_7_0,
.post_init = qcom_pcie_post_init_2_7_0,
.host_post_init = qcom_pcie_host_post_init_2_7_0,
.deinit = qcom_pcie_deinit_2_7_0,
.ltssm_enable = qcom_pcie_2_3_2_ltssm_enable,
};
/* Qcom IP rev.: 2.9.0 Synopsys IP rev.: 5.00a */
static const struct qcom_pcie_ops ops_2_9_0 = {
.get_resources = qcom_pcie_get_resources_2_9_0,
@ -1411,7 +1422,7 @@ static const struct qcom_pcie_cfg cfg_2_9_0 = {
};
static const struct qcom_pcie_cfg cfg_sc8280xp = {
.ops = &ops_1_9_0,
.ops = &ops_1_21_0,
.no_l0s = true,
};
@ -1716,7 +1727,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_host_deinit;
}
writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK);
writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
pcie->parf + PARF_INT_ALL_MASK);
}
qcom_pcie_icc_opp_update(pcie);
@ -1828,6 +1840,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-ipq8064-v2", .data = &cfg_2_1_0 },
{ .compatible = "qcom,pcie-ipq8074", .data = &cfg_2_3_3 },
{ .compatible = "qcom,pcie-ipq8074-gen3", .data = &cfg_2_9_0 },
{ .compatible = "qcom,pcie-ipq9574", .data = &cfg_2_9_0 },
{ .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
{ .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
{ .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
@ -1843,7 +1856,7 @@ static const struct of_device_id qcom_pcie_match[] = {
{ .compatible = "qcom,pcie-sm8450-pcie0", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-sm8450-pcie1", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-sm8550", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-x1e80100", .data = &cfg_1_9_0 },
{ .compatible = "qcom,pcie-x1e80100", .data = &cfg_sc8280xp },
{ }
};


@ -775,7 +775,7 @@ static struct platform_driver rcar_gen4_pcie_driver = {
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
.probe = rcar_gen4_pcie_probe,
.remove_new = rcar_gen4_pcie_remove,
.remove = rcar_gen4_pcie_remove,
};
module_platform_driver(rcar_gen4_pcie_driver);


@ -1704,9 +1704,6 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
if (ret)
dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
pci_epc_deinit_notify(pcie->pci.ep.epc);
dw_pcie_ep_cleanup(&pcie->pci.ep);
reset_control_assert(pcie->core_rst);
tegra_pcie_disable_phy(pcie);
@ -1785,6 +1782,10 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
goto fail_phy;
}
/* Perform cleanup that requires refclk */
pci_epc_deinit_notify(pcie->pci.ep.epc);
dw_pcie_ep_cleanup(&pcie->pci.ep);
/* Clear any stale interrupt statuses */
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
@ -2493,7 +2494,7 @@ static const struct dev_pm_ops tegra_pcie_dw_pm_ops = {
static struct platform_driver tegra_pcie_dw_driver = {
.probe = tegra_pcie_dw_probe,
.remove_new = tegra_pcie_dw_remove,
.remove = tegra_pcie_dw_remove,
.shutdown = tegra_pcie_dw_shutdown,
.driver = {
.name = "tegra194-pcie",


@ -1996,7 +1996,7 @@ static struct platform_driver advk_pcie_driver = {
.of_match_table = advk_pcie_of_match_table,
},
.probe = advk_pcie_probe,
.remove_new = advk_pcie_remove,
.remove = advk_pcie_remove,
};
module_platform_driver(advk_pcie_driver);


@ -82,7 +82,7 @@ static struct platform_driver gen_pci_driver = {
.of_match_table = gen_pci_of_match,
},
.probe = pci_host_common_probe,
.remove_new = pci_host_common_remove,
.remove = pci_host_common_remove,
};
module_platform_driver(gen_pci_driver);


@ -1727,7 +1727,7 @@ static struct platform_driver mvebu_pcie_driver = {
.pm = &mvebu_pcie_pm_ops,
},
.probe = mvebu_pcie_probe,
.remove_new = mvebu_pcie_remove,
.remove = mvebu_pcie_remove,
};
module_platform_driver(mvebu_pcie_driver);


@ -1460,7 +1460,7 @@ static int tegra_pcie_get_resources(struct tegra_pcie *pcie)
pcie->cs = *res;
/* constrain configuration space to 4 KiB */
pcie->cs.end = pcie->cs.start + SZ_4K - 1;
resource_set_size(&pcie->cs, SZ_4K);
pcie->cfg = devm_ioremap_resource(dev, &pcie->cs);
if (IS_ERR(pcie->cfg)) {
@ -2800,6 +2800,6 @@ static struct platform_driver tegra_pcie_driver = {
.pm = &tegra_pcie_pm_ops,
},
.probe = tegra_pcie_probe,
.remove_new = tegra_pcie_remove,
.remove = tegra_pcie_remove,
};
module_platform_driver(tegra_pcie_driver);


@ -400,9 +400,9 @@ static int thunder_pem_acpi_init(struct pci_config_window *cfg)
* Reserve 64K size PEM specific resources. The full 16M range
* size is required for thunder_pem_init() call.
*/
res_pem->end = res_pem->start + SZ_64K - 1;
resource_set_size(res_pem, SZ_64K);
thunder_pem_reserve_range(dev, root->segment, res_pem);
res_pem->end = res_pem->start + SZ_16M - 1;
resource_set_size(res_pem, SZ_16M);
/* Reserve PCI configuration space as well. */
thunder_pem_reserve_range(dev, root->segment, &cfg->res);


@ -518,7 +518,7 @@ static struct platform_driver xgene_msi_driver = {
.of_match_table = xgene_msi_match_table,
},
.probe = xgene_msi_probe,
.remove_new = xgene_msi_remove,
.remove = xgene_msi_remove,
};
static int __init xgene_pcie_msi_init(void)


@ -267,7 +267,7 @@ static struct platform_driver altera_msi_driver = {
.of_match_table = altera_msi_of_match,
},
.probe = altera_msi_probe,
.remove_new = altera_msi_remove,
.remove = altera_msi_remove,
};
static int __init altera_msi_init(void)


@ -815,10 +815,10 @@ static void altera_pcie_remove(struct platform_device *pdev)
}
static struct platform_driver altera_pcie_driver = {
.probe = altera_pcie_probe,
.remove_new = altera_pcie_remove,
.probe = altera_pcie_probe,
.remove = altera_pcie_remove,
.driver = {
.name = "altera-pcie",
.name = "altera-pcie",
.of_match_table = altera_pcie_of_match,
},
};


@ -1928,7 +1928,7 @@ static const struct dev_pm_ops brcm_pcie_pm_ops = {
static struct platform_driver brcm_pcie_driver = {
.probe = brcm_pcie_probe,
.remove_new = brcm_pcie_remove,
.remove = brcm_pcie_remove,
.driver = {
.name = "brcm-pcie",
.of_match_table = brcm_pcie_match,


@ -317,7 +317,7 @@ static struct platform_driver hisi_pcie_error_handler_driver = {
.acpi_match_table = hisi_pcie_acpi_match,
},
.probe = hisi_pcie_error_handler_probe,
.remove_new = hisi_pcie_error_handler_remove,
.remove = hisi_pcie_error_handler_remove,
};
module_platform_driver(hisi_pcie_error_handler_driver);


@ -134,7 +134,7 @@ static struct platform_driver iproc_pltfm_pcie_driver = {
.of_match_table = of_match_ptr(iproc_pcie_of_match_table),
},
.probe = iproc_pltfm_pcie_probe,
.remove_new = iproc_pltfm_pcie_remove,
.remove = iproc_pltfm_pcie_remove,
.shutdown = iproc_pltfm_pcie_shutdown,
};
module_platform_driver(iproc_pltfm_pcie_driver);


@ -28,7 +28,12 @@
#include "../pci.h"
#define PCIE_BASE_CFG_REG 0x14
#define PCIE_BASE_CFG_SPEED GENMASK(15, 8)
#define PCIE_SETTING_REG 0x80
#define PCIE_SETTING_LINK_WIDTH GENMASK(11, 8)
#define PCIE_SETTING_GEN_SUPPORT GENMASK(14, 12)
#define PCIE_PCI_IDS_1 0x9c
#define PCI_CLASS(class) (class << 8)
#define PCIE_RC_MODE BIT(0)
@ -125,6 +130,9 @@
struct mtk_gen3_pcie;
#define PCIE_CONF_LINK2_CTL_STS (PCIE_CFG_OFFSET_ADDR + 0xb0)
#define PCIE_CONF_LINK2_LCR2_LINK_SPEED GENMASK(3, 0)
/**
* struct mtk_gen3_pcie_pdata - differentiate between host generations
* @power_up: pcie power_up callback
@ -160,6 +168,8 @@ struct mtk_msi_set {
* @phy: PHY controller block
* @clks: PCIe clocks
* @num_clks: PCIe clocks count for this port
* @max_link_speed: Maximum link speed (PCIe Gen) for this port
* @num_lanes: Number of PCIe lanes for this port
* @irq: PCIe controller interrupt number
* @saved_irq_state: IRQ enable state saved at suspend time
* @irq_lock: lock protecting IRQ register access
@ -180,6 +190,8 @@ struct mtk_gen3_pcie {
struct phy *phy;
struct clk_bulk_data *clks;
int num_clks;
u8 max_link_speed;
u8 num_lanes;
int irq;
u32 saved_irq_state;
@ -381,11 +393,35 @@ static int mtk_pcie_startup_port(struct mtk_gen3_pcie *pcie)
int err;
u32 val;
/* Set as RC mode */
/* Set as RC mode and set controller PCIe Gen speed restriction, if any */
val = readl_relaxed(pcie->base + PCIE_SETTING_REG);
val |= PCIE_RC_MODE;
if (pcie->max_link_speed) {
val &= ~PCIE_SETTING_GEN_SUPPORT;
/* Can enable link speed support only from Gen2 onwards */
if (pcie->max_link_speed >= 2)
val |= FIELD_PREP(PCIE_SETTING_GEN_SUPPORT,
GENMASK(pcie->max_link_speed - 2, 0));
}
if (pcie->num_lanes) {
val &= ~PCIE_SETTING_LINK_WIDTH;
/* Zero means one lane, each bit activates x2/x4/x8/x16 */
if (pcie->num_lanes > 1)
val |= FIELD_PREP(PCIE_SETTING_LINK_WIDTH,
GENMASK(fls(pcie->num_lanes >> 2), 0));
}
writel_relaxed(val, pcie->base + PCIE_SETTING_REG);
/* Set Link Control 2 (LNKCTL2) speed restriction, if any */
if (pcie->max_link_speed) {
val = readl_relaxed(pcie->base + PCIE_CONF_LINK2_CTL_STS);
val &= ~PCIE_CONF_LINK2_LCR2_LINK_SPEED;
val |= FIELD_PREP(PCIE_CONF_LINK2_LCR2_LINK_SPEED, pcie->max_link_speed);
writel_relaxed(val, pcie->base + PCIE_CONF_LINK2_CTL_STS);
}
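/*
 * Illustrative example (not part of the patch): for max_link_speed = 3
 * (Gen3), GENMASK(1, 0) = 0x3 lands in PCIE_SETTING_GEN_SUPPORT bits
 * 13:12, enabling the Gen2 and Gen3 rates on top of always-supported
 * Gen1, and the LNKCTL2 target-speed field above is set to 3. Likewise
 * num_lanes = 4 gives fls(4 >> 2) = 1, so GENMASK(1, 0) activates the
 * x2 and x4 bits of PCIE_SETTING_LINK_WIDTH.
 */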
/* Set class code */
val = readl_relaxed(pcie->base + PCIE_PCI_IDS_1);
val &= ~GENMASK(31, 8);
@ -813,6 +849,7 @@ static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
struct device *dev = pcie->dev;
struct platform_device *pdev = to_platform_device(dev);
struct resource *regs;
u32 num_lanes;
regs = platform_get_resource_byname(pdev, IORESOURCE_MEM, "pcie-mac");
if (!regs)
@ -858,6 +895,14 @@ static int mtk_pcie_parse_port(struct mtk_gen3_pcie *pcie)
return pcie->num_clks;
}
ret = of_property_read_u32(dev->of_node, "num-lanes", &num_lanes);
if (ret == 0) {
if (num_lanes == 0 || num_lanes > 16 || (num_lanes != 1 && num_lanes % 2))
dev_warn(dev, "invalid num-lanes, using controller defaults\n");
else
pcie->num_lanes = num_lanes;
}
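/*
 * The check above accepts a num-lanes of 1 or any even value up to 16;
 * 0, odd values greater than 1, and values above 16 are rejected with
 * a warning and the controller's default link width is kept.
 */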
return 0;
}
@ -1004,9 +1049,21 @@ static void mtk_pcie_power_down(struct mtk_gen3_pcie *pcie)
reset_control_bulk_assert(pcie->soc->phy_resets.num_resets, pcie->phy_resets);
}
static int mtk_pcie_get_controller_max_link_speed(struct mtk_gen3_pcie *pcie)
{
u32 val;
int ret;
val = readl_relaxed(pcie->base + PCIE_BASE_CFG_REG);
val = FIELD_GET(PCIE_BASE_CFG_SPEED, val);
ret = fls(val);
return ret > 0 ? ret : -EINVAL;
}
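/*
 * Illustrative example (not part of the patch): a PCIE_BASE_CFG_SPEED
 * field of 0x7 (the 2.5, 5 and 8 GT/s bits) gives fls(0x7) = 3, i.e.
 * a Gen3-capable controller; an empty field yields -EINVAL.
 */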
static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
{
int err;
int err, max_speed;
err = mtk_pcie_parse_port(pcie);
if (err)
@ -1031,6 +1088,20 @@ static int mtk_pcie_setup(struct mtk_gen3_pcie *pcie)
if (err)
return err;
err = of_pci_get_max_link_speed(pcie->dev->of_node);
if (err) {
/* Get the maximum speed supported by the controller */
max_speed = mtk_pcie_get_controller_max_link_speed(pcie);
/* Set max_link_speed only if the controller supports it */
if (max_speed >= 0 && max_speed <= err) {
pcie->max_link_speed = err;
dev_info(pcie->dev,
"maximum controller link speed Gen%d, overriding to Gen%u",
max_speed, pcie->max_link_speed);
}
}
/* Try link up */
err = mtk_pcie_startup_port(pcie);
if (err)
@ -1225,7 +1296,7 @@ MODULE_DEVICE_TABLE(of, mtk_pcie_of_match);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
.remove_new = mtk_pcie_remove,
.remove = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie-gen3",
.of_match_table = mtk_pcie_of_match,


@ -1235,7 +1235,7 @@ MODULE_DEVICE_TABLE(of, mtk_pcie_ids);
static struct platform_driver mtk_pcie_driver = {
.probe = mtk_pcie_probe,
.remove_new = mtk_pcie_remove,
.remove = mtk_pcie_remove,
.driver = {
.name = "mtk-pcie",
.of_match_table = mtk_pcie_ids,


@ -541,7 +541,7 @@ MODULE_DEVICE_TABLE(of, mt7621_pcie_ids);
static struct platform_driver mt7621_pcie_driver = {
.probe = mt7621_pcie_probe,
.remove_new = mt7621_pcie_remove,
.remove = mt7621_pcie_remove,
.driver = {
.name = "mt7621-pci",
.of_match_table = mt7621_pcie_ids,


@ -796,8 +796,8 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host)
rcar_pci_write_reg(pcie, 0, PCIEMSIIER);
/*
* Setup MSI data target using RC base address address, which
* is guaranteed to be in the low 32bit range on any R-Car HW.
* Setup MSI data target using RC base address, which is guaranteed
* to be in the low 32bit range on any R-Car HW.
*/
rcar_pci_write_reg(pcie, lower_32_bits(res.start) | MSIFE, PCIEMSIALR);
rcar_pci_write_reg(pcie, upper_32_bits(res.start), PCIEMSIAUR);


@ -10,12 +10,16 @@
#include <linux/configfs.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/irq.h>
#include <linux/of.h>
#include <linux/pci-epc.h>
#include <linux/platform_device.h>
#include <linux/pci-epf.h>
#include <linux/sizes.h>
#include <linux/workqueue.h>
#include "pcie-rockchip.h"
@ -48,6 +52,10 @@ struct rockchip_pcie_ep {
u64 irq_pci_addr;
u8 irq_pci_fn;
u8 irq_pending;
int perst_irq;
bool perst_asserted;
bool link_up;
struct delayed_work link_training;
};
static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
@ -63,15 +71,25 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
}
static int rockchip_pcie_ep_ob_atu_num_bits(struct rockchip_pcie *rockchip,
u64 pci_addr, size_t size)
{
int num_pass_bits = fls64(pci_addr ^ (pci_addr + size - 1));
return clamp(num_pass_bits,
ROCKCHIP_PCIE_AT_MIN_NUM_BITS,
ROCKCHIP_PCIE_AT_MAX_NUM_BITS);
}
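/*
 * Illustrative example (not part of the patch): for pci_addr 0x12345000
 * and size 0x2000, the first and last bytes differ in bits 13:0
 * (0x12345000 ^ 0x12346fff = 0x3fff), so fls64() yields 14 pass-through
 * bits; a 4-byte access would yield 2 and be clamped to the 8-bit
 * (256 B) minimum window.
 */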
static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
u32 r, u64 cpu_addr, u64 pci_addr,
size_t size)
{
int num_pass_bits = fls64(size - 1);
int num_pass_bits;
u32 addr0, addr1, desc0;
if (num_pass_bits < 8)
num_pass_bits = 8;
num_pass_bits = rockchip_pcie_ep_ob_atu_num_bits(rockchip,
pci_addr, size);
addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
(lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
@ -228,6 +246,28 @@ static inline u32 rockchip_ob_region(phys_addr_t addr)
return (addr >> ilog2(SZ_1M)) & 0x1f;
}
static u64 rockchip_pcie_ep_align_addr(struct pci_epc *epc, u64 pci_addr,
size_t *pci_size, size_t *addr_offset)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
size_t size = *pci_size;
u64 offset, mask;
int num_bits;
num_bits = rockchip_pcie_ep_ob_atu_num_bits(&ep->rockchip,
pci_addr, size);
mask = (1ULL << num_bits) - 1;
offset = pci_addr & mask;
if (size + offset > SZ_1M)
size = SZ_1M - offset;
*pci_size = ALIGN(offset + size, ROCKCHIP_PCIE_AT_SIZE_ALIGN);
*addr_offset = offset;
return pci_addr & ~mask;
}
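/*
 * Illustrative example (not part of the patch): with a 20-bit window,
 * pci_addr 0x400f0000 and size 0x20000 give offset = 0xf0000; since
 * offset + size spills past the 1 MiB region, size is trimmed to
 * 0x10000 and *pci_size becomes 0x100000, leaving the caller to map
 * the remainder with a further call.
 */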
static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
phys_addr_t addr, u64 pci_addr,
size_t size)
@ -236,6 +276,9 @@ static int rockchip_pcie_ep_map_addr(struct pci_epc *epc, u8 fn, u8 vfn,
struct rockchip_pcie *pcie = &ep->rockchip;
u32 r = rockchip_ob_region(addr);
if (test_bit(r, &ep->ob_region_map))
return -EBUSY;
rockchip_pcie_prog_ep_ob_atu(pcie, fn, r, addr, pci_addr, size);
set_bit(r, &ep->ob_region_map);
@ -249,13 +292,9 @@ static void rockchip_pcie_ep_unmap_addr(struct pci_epc *epc, u8 fn, u8 vfn,
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 r;
u32 r = rockchip_ob_region(addr);
for (r = 0; r < ep->max_regions; r++)
if (ep->ob_addr[r] == addr)
break;
if (r == ep->max_regions)
if (addr != ep->ob_addr[r] || !test_bit(r, &ep->ob_region_map))
return;
rockchip_pcie_clear_ep_ob_atu(rockchip, r);
@ -351,9 +390,10 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
{
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 flags, mme, data, data_mask;
size_t irq_pci_size, offset;
u64 irq_pci_addr;
u8 msi_count;
u64 pci_addr;
u32 r;
/* Check MSI enable bit */
flags = rockchip_pcie_read(&ep->rockchip,
@ -389,18 +429,21 @@ static int rockchip_pcie_ep_send_msi_irq(struct rockchip_pcie_ep *ep, u8 fn,
PCI_MSI_ADDRESS_LO);
/* Set the outbound region if needed. */
if (unlikely(ep->irq_pci_addr != (pci_addr & PCIE_ADDR_MASK) ||
irq_pci_size = ~PCIE_ADDR_MASK + 1;
irq_pci_addr = rockchip_pcie_ep_align_addr(ep->epc,
pci_addr & PCIE_ADDR_MASK,
&irq_pci_size, &offset);
if (unlikely(ep->irq_pci_addr != irq_pci_addr ||
ep->irq_pci_fn != fn)) {
r = rockchip_ob_region(ep->irq_phys_addr);
rockchip_pcie_prog_ep_ob_atu(rockchip, fn, r,
ep->irq_phys_addr,
pci_addr & PCIE_ADDR_MASK,
~PCIE_ADDR_MASK + 1);
ep->irq_pci_addr = (pci_addr & PCIE_ADDR_MASK);
rockchip_pcie_prog_ep_ob_atu(rockchip, fn,
rockchip_ob_region(ep->irq_phys_addr),
ep->irq_phys_addr,
irq_pci_addr, irq_pci_size);
ep->irq_pci_addr = irq_pci_addr;
ep->irq_pci_fn = fn;
}
writew(data, ep->irq_cpu_addr + (pci_addr & ~PCIE_ADDR_MASK));
writew(data, ep->irq_cpu_addr + offset + (pci_addr & ~PCIE_ADDR_MASK));
return 0;
}
@ -432,14 +475,222 @@ static int rockchip_pcie_ep_start(struct pci_epc *epc)
rockchip_pcie_write(rockchip, cfg, PCIE_CORE_PHY_FUNC_CFG);
if (rockchip->perst_gpio)
enable_irq(ep->perst_irq);
/* Enable configuration and start link training */
rockchip_pcie_write(rockchip,
PCIE_CLIENT_LINK_TRAIN_ENABLE |
PCIE_CLIENT_CONF_ENABLE,
PCIE_CLIENT_CONFIG);
if (!rockchip->perst_gpio)
schedule_delayed_work(&ep->link_training, 0);
return 0;
}
static void rockchip_pcie_ep_stop(struct pci_epc *epc)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
if (rockchip->perst_gpio) {
ep->perst_asserted = true;
disable_irq(ep->perst_irq);
}
cancel_delayed_work_sync(&ep->link_training);
/* Stop link training and disable configuration */
rockchip_pcie_write(rockchip,
PCIE_CLIENT_CONF_DISABLE |
PCIE_CLIENT_LINK_TRAIN_DISABLE,
PCIE_CLIENT_CONFIG);
}
static void rockchip_pcie_ep_retrain_link(struct rockchip_pcie *rockchip)
{
u32 status;
status = rockchip_pcie_read(rockchip, PCIE_EP_CONFIG_LCS);
status |= PCI_EXP_LNKCTL_RL;
rockchip_pcie_write(rockchip, status, PCIE_EP_CONFIG_LCS);
}
static bool rockchip_pcie_ep_link_up(struct rockchip_pcie *rockchip)
{
u32 val = rockchip_pcie_read(rockchip, PCIE_CLIENT_BASIC_STATUS1);
return PCIE_LINK_UP(val);
}
static void rockchip_pcie_ep_link_training(struct work_struct *work)
{
struct rockchip_pcie_ep *ep =
container_of(work, struct rockchip_pcie_ep, link_training.work);
struct rockchip_pcie *rockchip = &ep->rockchip;
struct device *dev = rockchip->dev;
u32 val;
int ret;
/* Enable Gen1 training and wait for its completion */
ret = readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
val, PCIE_LINK_TRAINING_DONE(val), 50,
LINK_TRAIN_TIMEOUT);
if (ret)
goto again;
/* Make sure that the link is up */
ret = readl_poll_timeout(rockchip->apb_base + PCIE_CLIENT_BASIC_STATUS1,
val, PCIE_LINK_UP(val), 50,
LINK_TRAIN_TIMEOUT);
if (ret)
goto again;
/*
* Check the current speed: if gen2 speed was requested and we are not
* at gen2 speed yet, retrain again for gen2.
*/
val = rockchip_pcie_read(rockchip, PCIE_CORE_CTRL);
if (!PCIE_LINK_IS_GEN2(val) && rockchip->link_gen == 2) {
/* Enable retrain for gen2 */
rockchip_pcie_ep_retrain_link(rockchip);
readl_poll_timeout(rockchip->apb_base + PCIE_CORE_CTRL,
val, PCIE_LINK_IS_GEN2(val), 50,
LINK_TRAIN_TIMEOUT);
}
/* Check again that the link is up */
if (!rockchip_pcie_ep_link_up(rockchip))
goto again;
/*
* If PERST# was asserted while polling the link, do not notify
* the function.
*/
if (ep->perst_asserted)
return;
val = rockchip_pcie_read(rockchip, PCIE_CLIENT_BASIC_STATUS0);
dev_info(dev,
"link up (negotiated speed: %sGT/s, width: x%lu)\n",
(val & PCIE_CLIENT_NEG_LINK_SPEED) ? "5" : "2.5",
((val & PCIE_CLIENT_NEG_LINK_WIDTH_MASK) >>
PCIE_CLIENT_NEG_LINK_WIDTH_SHIFT) << 1);
/* Notify the function */
pci_epc_linkup(ep->epc);
ep->link_up = true;
return;
again:
schedule_delayed_work(&ep->link_training, msecs_to_jiffies(5));
}
static void rockchip_pcie_ep_perst_assert(struct rockchip_pcie_ep *ep)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
dev_dbg(rockchip->dev, "PERST# asserted, link down\n");
if (ep->perst_asserted)
return;
ep->perst_asserted = true;
cancel_delayed_work_sync(&ep->link_training);
if (ep->link_up) {
pci_epc_linkdown(ep->epc);
ep->link_up = false;
}
}
static void rockchip_pcie_ep_perst_deassert(struct rockchip_pcie_ep *ep)
{
struct rockchip_pcie *rockchip = &ep->rockchip;
dev_dbg(rockchip->dev, "PERST# de-asserted, starting link training\n");
if (!ep->perst_asserted)
return;
ep->perst_asserted = false;
/* Enable link re-training */
rockchip_pcie_ep_retrain_link(rockchip);
/* Start link training */
schedule_delayed_work(&ep->link_training, 0);
}
static irqreturn_t rockchip_pcie_ep_perst_irq_thread(int irq, void *data)
{
struct pci_epc *epc = data;
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
u32 perst = gpiod_get_value(rockchip->perst_gpio);
if (perst)
rockchip_pcie_ep_perst_assert(ep);
else
rockchip_pcie_ep_perst_deassert(ep);
irq_set_irq_type(ep->perst_irq,
(perst ? IRQF_TRIGGER_HIGH : IRQF_TRIGGER_LOW));
return IRQ_HANDLED;
}
static int rockchip_pcie_ep_setup_irq(struct pci_epc *epc)
{
struct rockchip_pcie_ep *ep = epc_get_drvdata(epc);
struct rockchip_pcie *rockchip = &ep->rockchip;
struct device *dev = rockchip->dev;
int ret;
if (!rockchip->perst_gpio)
return 0;
/* PCIe reset interrupt */
ep->perst_irq = gpiod_to_irq(rockchip->perst_gpio);
if (ep->perst_irq < 0) {
dev_err(dev,
"failed to get IRQ for PERST# GPIO: %d\n",
ep->perst_irq);
return ep->perst_irq;
}
/*
* The perst_gpio is active low, so when it is inactive on start, it
* is high and will trigger the perst_irq handler. So treat this initial
* IRQ as a dummy one by faking the host asserting PERST#.
*/
ep->perst_asserted = true;
irq_set_status_flags(ep->perst_irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(dev, ep->perst_irq, NULL,
rockchip_pcie_ep_perst_irq_thread,
IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
"pcie-ep-perst", epc);
if (ret) {
dev_err(dev,
"failed to request IRQ for PERST# GPIO: %d\n",
ret);
return ret;
}
return 0;
}
static const struct pci_epc_features rockchip_pcie_epc_features = {
.linkup_notifier = false,
.linkup_notifier = true,
.msi_capable = true,
.msix_capable = false,
.align = 256,
.align = ROCKCHIP_PCIE_AT_SIZE_ALIGN,
};
static const struct pci_epc_features*
@ -452,17 +703,19 @@ static const struct pci_epc_ops rockchip_pcie_epc_ops = {
.write_header = rockchip_pcie_ep_write_header,
.set_bar = rockchip_pcie_ep_set_bar,
.clear_bar = rockchip_pcie_ep_clear_bar,
.align_addr = rockchip_pcie_ep_align_addr,
.map_addr = rockchip_pcie_ep_map_addr,
.unmap_addr = rockchip_pcie_ep_unmap_addr,
.set_msi = rockchip_pcie_ep_set_msi,
.get_msi = rockchip_pcie_ep_get_msi,
.raise_irq = rockchip_pcie_ep_raise_irq,
.start = rockchip_pcie_ep_start,
.stop = rockchip_pcie_ep_stop,
.get_features = rockchip_pcie_ep_get_features,
};
static int rockchip_pcie_parse_ep_dt(struct rockchip_pcie *rockchip,
struct rockchip_pcie_ep *ep)
static int rockchip_pcie_ep_get_resources(struct rockchip_pcie *rockchip,
struct rockchip_pcie_ep *ep)
{
struct device *dev = rockchip->dev;
int err;
@ -496,91 +749,63 @@ static const struct of_device_id rockchip_pcie_ep_of_match[] = {
{},
};
static int rockchip_pcie_ep_probe(struct platform_device *pdev)
static int rockchip_pcie_ep_init_ob_mem(struct rockchip_pcie_ep *ep)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie_ep *ep;
struct rockchip_pcie *rockchip;
struct pci_epc *epc;
size_t max_regions;
struct rockchip_pcie *rockchip = &ep->rockchip;
struct device *dev = rockchip->dev;
struct pci_epc_mem_window *windows = NULL;
int err, i;
u32 cfg_msi, cfg_msix_cp;
ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
if (!ep)
return -ENOMEM;
rockchip = &ep->rockchip;
rockchip->is_rc = false;
rockchip->dev = dev;
epc = devm_pci_epc_create(dev, &rockchip_pcie_epc_ops);
if (IS_ERR(epc)) {
dev_err(dev, "failed to create epc device\n");
return PTR_ERR(epc);
}
ep->epc = epc;
epc_set_drvdata(epc, ep);
err = rockchip_pcie_parse_ep_dt(rockchip, ep);
if (err)
return err;
err = rockchip_pcie_enable_clocks(rockchip);
if (err)
return err;
err = rockchip_pcie_init_port(rockchip);
if (err)
goto err_disable_clocks;
/* Establish the link automatically */
rockchip_pcie_write(rockchip, PCIE_CLIENT_LINK_TRAIN_ENABLE,
PCIE_CLIENT_CONFIG);
max_regions = ep->max_regions;
ep->ob_addr = devm_kcalloc(dev, max_regions, sizeof(*ep->ob_addr),
ep->ob_addr = devm_kcalloc(dev, ep->max_regions, sizeof(*ep->ob_addr),
GFP_KERNEL);
if (!ep->ob_addr) {
err = -ENOMEM;
goto err_uninit_port;
}
/* Only enable function 0 by default */
rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);
if (!ep->ob_addr)
return -ENOMEM;
windows = devm_kcalloc(dev, ep->max_regions,
sizeof(struct pci_epc_mem_window), GFP_KERNEL);
if (!windows) {
err = -ENOMEM;
goto err_uninit_port;
}
if (!windows)
return -ENOMEM;
for (i = 0; i < ep->max_regions; i++) {
windows[i].phys_base = rockchip->mem_res->start + (SZ_1M * i);
windows[i].size = SZ_1M;
windows[i].page_size = SZ_1M;
}
err = pci_epc_multi_mem_init(epc, windows, ep->max_regions);
err = pci_epc_multi_mem_init(ep->epc, windows, ep->max_regions);
devm_kfree(dev, windows);
if (err < 0) {
dev_err(dev, "failed to initialize the memory space\n");
goto err_uninit_port;
return err;
}
ep->irq_cpu_addr = pci_epc_mem_alloc_addr(epc, &ep->irq_phys_addr,
ep->irq_cpu_addr = pci_epc_mem_alloc_addr(ep->epc, &ep->irq_phys_addr,
SZ_1M);
if (!ep->irq_cpu_addr) {
dev_err(dev, "failed to reserve memory space for MSI\n");
err = -ENOMEM;
goto err_epc_mem_exit;
}
ep->irq_pci_addr = ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR;
return 0;
err_epc_mem_exit:
pci_epc_mem_exit(ep->epc);
return err;
}
static void rockchip_pcie_ep_exit_ob_mem(struct rockchip_pcie_ep *ep)
{
pci_epc_mem_exit(ep->epc);
}
static void rockchip_pcie_ep_hide_broken_msix_cap(struct rockchip_pcie *rockchip)
{
u32 cfg_msi, cfg_msix_cp;
/*
* MSI-X is not supported but the controller still advertises the MSI-X
* capability by default, which can lead to the Root Complex side
@ -603,19 +828,68 @@ static int rockchip_pcie_ep_probe(struct platform_device *pdev)
rockchip_pcie_write(rockchip, cfg_msi,
PCIE_EP_CONFIG_BASE + ROCKCHIP_PCIE_EP_MSI_CTRL_REG);
}
rockchip_pcie_write(rockchip, PCIE_CLIENT_CONF_ENABLE,
PCIE_CLIENT_CONFIG);
static int rockchip_pcie_ep_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct rockchip_pcie_ep *ep;
struct rockchip_pcie *rockchip;
struct pci_epc *epc;
int err;
ep = devm_kzalloc(dev, sizeof(*ep), GFP_KERNEL);
if (!ep)
return -ENOMEM;
rockchip = &ep->rockchip;
rockchip->is_rc = false;
rockchip->dev = dev;
INIT_DELAYED_WORK(&ep->link_training, rockchip_pcie_ep_link_training);
epc = devm_pci_epc_create(dev, &rockchip_pcie_epc_ops);
if (IS_ERR(epc)) {
dev_err(dev, "failed to create EPC device\n");
return PTR_ERR(epc);
}
ep->epc = epc;
epc_set_drvdata(epc, ep);
err = rockchip_pcie_ep_get_resources(rockchip, ep);
if (err)
return err;
err = rockchip_pcie_ep_init_ob_mem(ep);
if (err)
return err;
err = rockchip_pcie_enable_clocks(rockchip);
if (err)
goto err_exit_ob_mem;
err = rockchip_pcie_init_port(rockchip);
if (err)
goto err_disable_clocks;
rockchip_pcie_ep_hide_broken_msix_cap(rockchip);
/* Only enable function 0 by default */
rockchip_pcie_write(rockchip, BIT(0), PCIE_CORE_PHY_FUNC_CFG);
pci_epc_init_notify(epc);
err = rockchip_pcie_ep_setup_irq(epc);
if (err < 0)
goto err_uninit_port;
return 0;
err_epc_mem_exit:
pci_epc_mem_exit(epc);
err_uninit_port:
rockchip_pcie_deinit_phys(rockchip);
err_disable_clocks:
rockchip_pcie_disable_clocks(rockchip);
err_exit_ob_mem:
rockchip_pcie_ep_exit_ob_mem(ep);
return err;
}


@ -294,7 +294,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
int err, i = MAX_LANE_NUM;
u32 status;
gpiod_set_value_cansleep(rockchip->ep_gpio, 0);
gpiod_set_value_cansleep(rockchip->perst_gpio, 0);
err = rockchip_pcie_init_port(rockchip);
if (err)
@ -323,7 +323,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
PCIE_CLIENT_CONFIG);
msleep(PCIE_T_PVPERL_MS);
gpiod_set_value_cansleep(rockchip->ep_gpio, 1);
gpiod_set_value_cansleep(rockchip->perst_gpio, 1);
msleep(PCIE_T_RRS_READY_MS);
@ -1050,7 +1050,7 @@ static struct platform_driver rockchip_pcie_driver = {
.pm = &rockchip_pcie_pm_ops,
},
.probe = rockchip_pcie_probe,
.remove_new = rockchip_pcie_remove,
.remove = rockchip_pcie_remove,
};
module_platform_driver(rockchip_pcie_driver);


@ -119,13 +119,15 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip)
return PTR_ERR(rockchip->aclk_rst);
}
if (rockchip->is_rc) {
rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep",
GPIOD_OUT_LOW);
if (IS_ERR(rockchip->ep_gpio))
return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio),
"failed to get ep GPIO\n");
}
if (rockchip->is_rc)
rockchip->perst_gpio = devm_gpiod_get_optional(dev, "ep",
GPIOD_OUT_LOW);
else
rockchip->perst_gpio = devm_gpiod_get_optional(dev, "reset",
GPIOD_IN);
if (IS_ERR(rockchip->perst_gpio))
return dev_err_probe(dev, PTR_ERR(rockchip->perst_gpio),
"failed to get PERST# GPIO\n");
rockchip->aclk_pcie = devm_clk_get(dev, "aclk");
if (IS_ERR(rockchip->aclk_pcie)) {
@ -244,11 +246,12 @@ int rockchip_pcie_init_port(struct rockchip_pcie *rockchip)
rockchip_pcie_write(rockchip, PCIE_CLIENT_GEN_SEL_1,
PCIE_CLIENT_CONFIG);
regs = PCIE_CLIENT_LINK_TRAIN_ENABLE | PCIE_CLIENT_ARI_ENABLE |
regs = PCIE_CLIENT_ARI_ENABLE |
PCIE_CLIENT_CONF_LANE_NUM(rockchip->lanes);
if (rockchip->is_rc)
regs |= PCIE_CLIENT_CONF_ENABLE | PCIE_CLIENT_MODE_RC;
regs |= PCIE_CLIENT_LINK_TRAIN_ENABLE |
PCIE_CLIENT_CONF_ENABLE | PCIE_CLIENT_MODE_RC;
else
regs |= PCIE_CLIENT_CONF_DISABLE | PCIE_CLIENT_MODE_EP;


@ -26,12 +26,14 @@
#define MAX_LANE_NUM 4
#define MAX_REGION_LIMIT 32
#define MIN_EP_APERTURE 28
#define LINK_TRAIN_TIMEOUT (500 * USEC_PER_MSEC)
#define PCIE_CLIENT_BASE 0x0
#define PCIE_CLIENT_CONFIG (PCIE_CLIENT_BASE + 0x00)
#define PCIE_CLIENT_CONF_ENABLE HIWORD_UPDATE_BIT(0x0001)
#define PCIE_CLIENT_CONF_DISABLE HIWORD_UPDATE(0x0001, 0)
#define PCIE_CLIENT_LINK_TRAIN_ENABLE HIWORD_UPDATE_BIT(0x0002)
#define PCIE_CLIENT_LINK_TRAIN_DISABLE HIWORD_UPDATE(0x0002, 0)
#define PCIE_CLIENT_ARI_ENABLE HIWORD_UPDATE_BIT(0x0008)
#define PCIE_CLIENT_CONF_LANE_NUM(x) HIWORD_UPDATE(0x0030, ENCODE_LANES(x))
#define PCIE_CLIENT_MODE_RC HIWORD_UPDATE_BIT(0x0040)
@ -49,6 +51,10 @@
#define PCIE_CLIENT_DEBUG_LTSSM_MASK GENMASK(5, 0)
#define PCIE_CLIENT_DEBUG_LTSSM_L1 0x18
#define PCIE_CLIENT_DEBUG_LTSSM_L2 0x19
#define PCIE_CLIENT_BASIC_STATUS0 (PCIE_CLIENT_BASE + 0x44)
#define PCIE_CLIENT_NEG_LINK_WIDTH_MASK GENMASK(7, 6)
#define PCIE_CLIENT_NEG_LINK_WIDTH_SHIFT 6
#define PCIE_CLIENT_NEG_LINK_SPEED BIT(5)
#define PCIE_CLIENT_BASIC_STATUS1 (PCIE_CLIENT_BASE + 0x48)
#define PCIE_CLIENT_LINK_STATUS_UP 0x00300000
#define PCIE_CLIENT_LINK_STATUS_MASK 0x00300000
@ -86,6 +92,8 @@
#define PCIE_CORE_CTRL_MGMT_BASE 0x900000
#define PCIE_CORE_CTRL (PCIE_CORE_CTRL_MGMT_BASE + 0x000)
#define PCIE_CORE_PL_CONF_LS_MASK 0x00000001
#define PCIE_CORE_PL_CONF_LS_READY 0x00000001
#define PCIE_CORE_PL_CONF_SPEED_5G 0x00000008
#define PCIE_CORE_PL_CONF_SPEED_MASK 0x00000018
#define PCIE_CORE_PL_CONF_LANE_MASK 0x00000006
@ -143,6 +151,7 @@
#define PCIE_RC_CONFIG_BASE 0xa00000
#define PCIE_EP_CONFIG_BASE 0xa00000
#define PCIE_EP_CONFIG_DID_VID (PCIE_EP_CONFIG_BASE + 0x00)
#define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_RID_CCR (PCIE_RC_CONFIG_BASE + 0x08)
#define PCIE_RC_CONFIG_DCR (PCIE_RC_CONFIG_BASE + 0xc4)
#define PCIE_RC_CONFIG_DCR_CSPL_SHIFT 18
@ -154,6 +163,7 @@
#define PCIE_RC_CONFIG_LINK_CAP (PCIE_RC_CONFIG_BASE + 0xcc)
#define PCIE_RC_CONFIG_LINK_CAP_L0S BIT(10)
#define PCIE_RC_CONFIG_LCS (PCIE_RC_CONFIG_BASE + 0xd0)
#define PCIE_EP_CONFIG_LCS (PCIE_EP_CONFIG_BASE + 0xd0)
#define PCIE_RC_CONFIG_L1_SUBSTATE_CTRL2 (PCIE_RC_CONFIG_BASE + 0x90c)
#define PCIE_RC_CONFIG_THP_CAP (PCIE_RC_CONFIG_BASE + 0x274)
#define PCIE_RC_CONFIG_THP_CAP_NEXT_MASK GENMASK(31, 20)
@ -191,6 +201,8 @@
#define ROCKCHIP_VENDOR_ID 0x1d87
#define PCIE_LINK_IS_L2(x) \
(((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
#define PCIE_LINK_TRAINING_DONE(x) \
(((x) & PCIE_CORE_PL_CONF_LS_MASK) == PCIE_CORE_PL_CONF_LS_READY)
#define PCIE_LINK_UP(x) \
(((x) & PCIE_CLIENT_LINK_STATUS_MASK) == PCIE_CLIENT_LINK_STATUS_UP)
#define PCIE_LINK_IS_GEN2(x) \
@ -241,10 +253,20 @@
#define ROCKCHIP_PCIE_EP_MSIX_CAP_CP_MASK GENMASK(15, 8)
#define ROCKCHIP_PCIE_EP_DUMMY_IRQ_ADDR 0x1
#define ROCKCHIP_PCIE_EP_PCI_LEGACY_IRQ_ADDR 0x3
#define ROCKCHIP_PCIE_AT_MIN_NUM_BITS 8
#define ROCKCHIP_PCIE_AT_MAX_NUM_BITS 20
#define ROCKCHIP_PCIE_AT_SIZE_ALIGN (1UL << ROCKCHIP_PCIE_AT_MIN_NUM_BITS)
#define ROCKCHIP_PCIE_EP_FUNC_BASE(fn) \
(PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
#define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
(PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
#define ROCKCHIP_PCIE_AT_MIN_NUM_BITS 8
#define ROCKCHIP_PCIE_AT_MAX_NUM_BITS 20
#define ROCKCHIP_PCIE_AT_SIZE_ALIGN (1UL << ROCKCHIP_PCIE_AT_MIN_NUM_BITS)
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
(PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
#define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
@ -307,7 +329,7 @@ struct rockchip_pcie {
struct regulator *vpcie3v3; /* 3.3V power supply */
struct regulator *vpcie1v8; /* 1.8V power supply */
struct regulator *vpcie0v9; /* 0.9V power supply */
struct gpio_desc *ep_gpio;
struct gpio_desc *perst_gpio;
u32 lanes;
u8 lanes_map;
int link_gen;


@ -916,6 +916,6 @@ static struct platform_driver nwl_pcie_driver = {
.of_match_table = nwl_pcie_of_match,
},
.probe = nwl_pcie_probe,
.remove_new = nwl_pcie_remove,
.remove = nwl_pcie_remove,
};
builtin_platform_driver(nwl_pcie_driver);


@ -25,9 +25,6 @@
#define MC_PCIE1_BRIDGE_ADDR 0x00008000u
#define MC_PCIE1_CTRL_ADDR 0x0000a000u
#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
/* PCIe Controller Phy Regs */
#define SEC_ERROR_EVENT_CNT 0x20
#define DED_ERROR_EVENT_CNT 0x24
@ -128,7 +125,6 @@
[EVENT_LOCAL_ ## x] = { __stringify(x), s }
#define PCIE_EVENT(x) \
.base = MC_PCIE_CTRL_ADDR, \
.offset = PCIE_EVENT_INT, \
.mask_offset = PCIE_EVENT_INT, \
.mask_high = 1, \
@ -136,7 +132,6 @@
.enb_mask = PCIE_EVENT_INT_ENB_MASK
#define SEC_EVENT(x) \
.base = MC_PCIE_CTRL_ADDR, \
.offset = SEC_ERROR_INT, \
.mask_offset = SEC_ERROR_INT_MASK, \
.mask = SEC_ERROR_INT_ ## x ## _INT, \
@ -144,7 +139,6 @@
.enb_mask = 0
#define DED_EVENT(x) \
.base = MC_PCIE_CTRL_ADDR, \
.offset = DED_ERROR_INT, \
.mask_offset = DED_ERROR_INT_MASK, \
.mask_high = 1, \
@ -152,7 +146,6 @@
.enb_mask = 0
#define LOCAL_EVENT(x) \
.base = MC_PCIE_BRIDGE_ADDR, \
.offset = ISTATUS_LOCAL, \
.mask_offset = IMASK_LOCAL, \
.mask_high = 0, \
@ -179,7 +172,8 @@ struct event_map {
struct mc_pcie {
struct plda_pcie_rp plda;
void __iomem *axi_base_addr;
void __iomem *bridge_base_addr;
void __iomem *ctrl_base_addr;
};
struct cause {
@ -253,7 +247,6 @@ static struct event_map local_status_to_event[] = {
};
static struct {
u32 base;
u32 offset;
u32 mask;
u32 shift;
@ -325,8 +318,7 @@ static inline u32 reg_to_event(u32 reg, struct event_map field)
static u32 pcie_events(struct mc_pcie *port)
{
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
u32 reg = readl_relaxed(ctrl_base_addr + PCIE_EVENT_INT);
u32 reg = readl_relaxed(port->ctrl_base_addr + PCIE_EVENT_INT);
u32 val = 0;
int i;
@ -338,8 +330,7 @@ static u32 pcie_events(struct mc_pcie *port)
static u32 sec_errors(struct mc_pcie *port)
{
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
u32 reg = readl_relaxed(ctrl_base_addr + SEC_ERROR_INT);
u32 reg = readl_relaxed(port->ctrl_base_addr + SEC_ERROR_INT);
u32 val = 0;
int i;
@ -351,8 +342,7 @@ static u32 sec_errors(struct mc_pcie *port)
static u32 ded_errors(struct mc_pcie *port)
{
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
u32 reg = readl_relaxed(ctrl_base_addr + DED_ERROR_INT);
u32 reg = readl_relaxed(port->ctrl_base_addr + DED_ERROR_INT);
u32 val = 0;
int i;
@ -364,8 +354,7 @@ static u32 ded_errors(struct mc_pcie *port)
static u32 local_events(struct mc_pcie *port)
{
void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
u32 reg = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
u32 reg = readl_relaxed(port->bridge_base_addr + ISTATUS_LOCAL);
u32 val = 0;
int i;
@ -412,8 +401,12 @@ static void mc_ack_event_irq(struct irq_data *data)
void __iomem *addr;
u32 mask;
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].offset;
if (event_descs[event].offset == ISTATUS_LOCAL)
addr = mc_port->bridge_base_addr;
else
addr = mc_port->ctrl_base_addr;
addr += event_descs[event].offset;
mask = event_descs[event].mask;
mask |= event_descs[event].enb_mask;
@ -429,8 +422,12 @@ static void mc_mask_event_irq(struct irq_data *data)
u32 mask;
u32 val;
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].mask_offset;
if (event_descs[event].offset == ISTATUS_LOCAL)
addr = mc_port->bridge_base_addr;
else
addr = mc_port->ctrl_base_addr;
addr += event_descs[event].mask_offset;
mask = event_descs[event].mask;
if (event_descs[event].enb_mask) {
mask <<= PCIE_EVENT_INT_ENB_SHIFT;
@ -460,8 +457,12 @@ static void mc_unmask_event_irq(struct irq_data *data)
u32 mask;
u32 val;
addr = mc_port->axi_base_addr + event_descs[event].base +
event_descs[event].mask_offset;
if (event_descs[event].offset == ISTATUS_LOCAL)
addr = mc_port->bridge_base_addr;
else
addr = mc_port->ctrl_base_addr;
addr += event_descs[event].mask_offset;
mask = event_descs[event].mask;
if (event_descs[event].enb_mask)
@ -554,26 +555,20 @@ static const struct plda_event mc_event = {
static inline void mc_clear_secs(struct mc_pcie *port)
{
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
SEC_ERROR_INT);
writel_relaxed(0, ctrl_base_addr + SEC_ERROR_EVENT_CNT);
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
port->ctrl_base_addr + SEC_ERROR_INT);
writel_relaxed(0, port->ctrl_base_addr + SEC_ERROR_EVENT_CNT);
}
static inline void mc_clear_deds(struct mc_pcie *port)
{
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
DED_ERROR_INT);
writel_relaxed(0, ctrl_base_addr + DED_ERROR_EVENT_CNT);
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
port->ctrl_base_addr + DED_ERROR_INT);
writel_relaxed(0, port->ctrl_base_addr + DED_ERROR_EVENT_CNT);
}
static void mc_disable_interrupts(struct mc_pcie *port)
{
void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
u32 val;
/* Ensure ECC bypass is enabled */
@ -581,22 +576,22 @@ static void mc_disable_interrupts(struct mc_pcie *port)
ECC_CONTROL_RX_RAM_ECC_BYPASS |
ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS |
ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS;
writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
writel_relaxed(val, port->ctrl_base_addr + ECC_CONTROL);
/* Disable SEC errors and clear any outstanding */
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
SEC_ERROR_INT_MASK);
writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
port->ctrl_base_addr + SEC_ERROR_INT_MASK);
mc_clear_secs(port);
/* Disable DED errors and clear any outstanding */
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
DED_ERROR_INT_MASK);
writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
port->ctrl_base_addr + DED_ERROR_INT_MASK);
mc_clear_deds(port);
/* Disable local interrupts and clear any outstanding */
writel_relaxed(0, bridge_base_addr + IMASK_LOCAL);
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_LOCAL);
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_MSI);
writel_relaxed(0, port->bridge_base_addr + IMASK_LOCAL);
writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_LOCAL);
writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_MSI);
/* Disable PCIe events and clear any outstanding */
val = PCIE_EVENT_INT_L2_EXIT_INT |
@ -605,11 +600,11 @@ static void mc_disable_interrupts(struct mc_pcie *port)
PCIE_EVENT_INT_L2_EXIT_INT_MASK |
PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK |
PCIE_EVENT_INT_DLUP_EXIT_INT_MASK;
writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
writel_relaxed(val, port->ctrl_base_addr + PCIE_EVENT_INT);
/* Disable host interrupts and clear any outstanding */
writel_relaxed(0, bridge_base_addr + IMASK_HOST);
writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
writel_relaxed(0, port->bridge_base_addr + IMASK_HOST);
writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_HOST);
}
static int mc_platform_init(struct pci_config_window *cfg)
@ -617,12 +612,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
struct device *dev = cfg->parent;
struct platform_device *pdev = to_platform_device(dev);
struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
void __iomem *bridge_base_addr =
port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
int ret;
/* Configure address translation table 0 for PCIe config space */
plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
plda_pcie_setup_window(port->bridge_base_addr, 0, cfg->res.start,
cfg->res.start,
resource_size(&cfg->res));
@ -649,7 +642,7 @@ static int mc_platform_init(struct pci_config_window *cfg)
static int mc_host_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
void __iomem *bridge_base_addr;
void __iomem *apb_base_addr;
struct plda_pcie_rp *plda;
int ret;
u32 val;
@ -661,30 +654,45 @@ static int mc_host_probe(struct platform_device *pdev)
plda = &port->plda;
plda->dev = dev;
port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
if (IS_ERR(port->axi_base_addr))
return PTR_ERR(port->axi_base_addr);
port->bridge_base_addr = devm_platform_ioremap_resource_byname(pdev,
"bridge");
port->ctrl_base_addr = devm_platform_ioremap_resource_byname(pdev,
"ctrl");
if (!IS_ERR(port->bridge_base_addr) && !IS_ERR(port->ctrl_base_addr))
goto addrs_set;
/*
* The original, incorrect, binding that lumped the control and
* bridge addresses together still needs to be handled by the driver.
*/
apb_base_addr = devm_platform_ioremap_resource_byname(pdev, "apb");
if (IS_ERR(apb_base_addr))
return dev_err_probe(dev, PTR_ERR(apb_base_addr),
"both legacy apb register and ctrl/bridge regions missing");
port->bridge_base_addr = apb_base_addr + MC_PCIE1_BRIDGE_ADDR;
port->ctrl_base_addr = apb_base_addr + MC_PCIE1_CTRL_ADDR;
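/*
 * With the legacy "apb" region both blocks live at fixed offsets in a
 * single mapping: bridge registers at apb + 0x8000
 * (MC_PCIE1_BRIDGE_ADDR) and control registers at apb + 0xa000
 * (MC_PCIE1_CTRL_ADDR), matching the old combined binding.
 */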
addrs_set:
mc_disable_interrupts(port);
bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
plda->bridge_addr = bridge_base_addr;
plda->bridge_addr = port->bridge_base_addr;
plda->num_events = NUM_EVENTS;
/* Allow enabling MSI by disabling MSI-X */
val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
val &= ~MSIX_CAP_MASK;
writel(val, bridge_base_addr + PCIE_PCI_IRQ_DW0);
writel(val, port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
/* Pick num vectors from bitfile programmed onto FPGA fabric */
val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
val &= NUM_MSI_MSGS_MASK;
val >>= NUM_MSI_MSGS_SHIFT;
plda->msi.num_vectors = 1 << val;
/* Pick vector address from design */
plda->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
plda->msi.vector_phy = readl_relaxed(port->bridge_base_addr + IMSI_ADDR);
ret = mc_pcie_init_clks(dev);
if (ret) {


@ -404,6 +404,9 @@ static int starfive_pcie_probe(struct platform_device *pdev)
if (ret)
return ret;
pm_runtime_enable(&pdev->dev);
pm_runtime_get_sync(&pdev->dev);
plda->host_ops = &sf_host_ops;
plda->num_events = PLDA_MAX_EVENT_NUM;
/* mask doorbell event */
@ -413,11 +416,12 @@ static int starfive_pcie_probe(struct platform_device *pdev)
plda->events_bitmap <<= PLDA_NUM_DMA_EVENTS;
ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops,
&stf_pcie_event);
if (ret)
if (ret) {
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return ret;
}
pm_runtime_enable(&pdev->dev);
pm_runtime_get_sync(&pdev->dev);
platform_set_drvdata(pdev, pcie);
return 0;
@ -480,7 +484,7 @@ static struct platform_driver starfive_pcie_driver = {
.pm = pm_sleep_ptr(&starfive_pcie_pm_ops),
},
.probe = starfive_pcie_probe,
.remove_new = starfive_pcie_remove,
.remove = starfive_pcie_remove,
};
module_platform_driver(starfive_pcie_driver);


@ -740,11 +740,9 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
if (!(features & VMD_FEAT_BIOS_PM_QUIRK))
return 0;
pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR);
if (!pos)
return 0;
goto out_state_change;
/*
* Skip if the max snoop LTR is non-zero, indicating BIOS has set it
@ -752,7 +750,7 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
*/
pci_read_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, &ltr_reg);
if (!!(ltr_reg & (PCI_LTR_VALUE_MASK | PCI_LTR_SCALE_MASK)))
return 0;
goto out_state_change;
/*
* Set the default values to the maximum required by the platform to
@ -764,6 +762,13 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
pci_write_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, ltr_reg);
pci_info(pdev, "VMD: Default LTR value set by driver\n");
out_state_change:
/*
* Ensure devices are in D0 before enabling PCI-PM L1 PM Substates, per
* PCIe r6.0, sec 5.5.4.
*/
pci_set_power_state_locked(pdev, PCI_D0);
pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
return 0;
}
@ -1100,6 +1105,10 @@ static const struct pci_device_id vmd_ids[] = {
.driver_data = VMD_FEATS_CLIENT,},
{PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
.driver_data = VMD_FEATS_CLIENT,},
{PCI_VDEVICE(INTEL, 0xb60b),
.driver_data = VMD_FEATS_CLIENT,},
{PCI_VDEVICE(INTEL, 0xb06f),
.driver_data = VMD_FEATS_CLIENT,},
{0,}
};
MODULE_DEVICE_TABLE(pci, vmd_ids);


@ -773,7 +773,7 @@ EXPORT_SYMBOL(pcim_iomap_region);
* Unmap a BAR and release its region manually. Only pass BARs that were
* previously mapped by pcim_iomap_region().
*/
static void pcim_iounmap_region(struct pci_dev *pdev, int bar)
void pcim_iounmap_region(struct pci_dev *pdev, int bar)
{
struct pcim_addr_devres res_searched;
@ -784,6 +784,7 @@ static void pcim_iounmap_region(struct pci_dev *pdev, int bar)
devres_release(&pdev->dev, pcim_addr_resource_release,
pcim_addr_resources_match, &res_searched);
}
EXPORT_SYMBOL(pcim_iounmap_region);
/**
* pcim_iomap_regions - Request and iomap PCI BARs (DEPRECATED)
@ -939,7 +940,7 @@ static void pcim_release_all_regions(struct pci_dev *pdev)
* desired, release individual regions with pcim_release_region() or all of
* them at once with pcim_release_all_regions().
*/
static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
{
int ret;
int bar;
@ -957,69 +958,17 @@ static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
return ret;
}
EXPORT_SYMBOL(pcim_request_all_regions);
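/*
 * Minimal usage sketch (not part of the patch), assuming a fictitious
 * "foo" driver with its registers in BAR 0 and <linux/pci.h> included:
 * request every region once, then map only the BARs actually needed.
 * This is the pattern that replaces the deprecated
 * pcim_iomap_regions_request_all() plus pcim_iomap_table() combination.
 */
static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *regs;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	ret = pcim_request_all_regions(pdev, "foo");
	if (ret)
		return ret;

	regs = pcim_iomap(pdev, 0, 0);	/* replaces pcim_iomap_table()[0] */
	if (!regs)
		return -ENOMEM;

	/* If BAR 0 must be handed back early: pcim_iounmap_region(pdev, 0) */
	return 0;
}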
/**
* pcim_iomap_regions_request_all - Request all BARs and iomap specified ones
* (DEPRECATED)
* @pdev: PCI device to map IO resources for
* @mask: Mask of BARs to iomap
* @name: Name associated with the requests
*
* Returns: 0 on success, negative error code on failure.
*
* Request all PCI BARs and iomap regions specified by @mask.
*
* To release these resources manually, call pcim_release_region() for the
* regions and pcim_iounmap() for the mappings.
*
* This function is DEPRECATED. Don't use it in new code. Instead, use one
* of the pcim_* region request functions in combination with a pcim_*
* mapping function.
*/
int pcim_iomap_regions_request_all(struct pci_dev *pdev, int mask,
const char *name)
{
int bar;
int ret;
void __iomem **legacy_iomap_table;
ret = pcim_request_all_regions(pdev, name);
if (ret != 0)
return ret;
for (bar = 0; bar < PCI_STD_NUM_BARS; bar++) {
if (!mask_contains_bar(mask, bar))
continue;
if (!pcim_iomap(pdev, bar, 0))
goto err;
}
return 0;
err:
/*
* If bar is larger than 0, then pcim_iomap() above has most likely
* failed because of -EINVAL. If it is equal 0, most likely the table
* couldn't be created, indicating -ENOMEM.
*/
ret = bar > 0 ? -EINVAL : -ENOMEM;
legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
while (--bar >= 0)
pcim_iounmap(pdev, legacy_iomap_table[bar]);
pcim_release_all_regions(pdev);
return ret;
}
EXPORT_SYMBOL(pcim_iomap_regions_request_all);
/**
* pcim_iounmap_regions - Unmap and release PCI BARs
* pcim_iounmap_regions - Unmap and release PCI BARs (DEPRECATED)
* @pdev: PCI device to map IO resources for
* @mask: Mask of BARs to unmap and release
*
* Unmap and release regions specified by @mask.
*
* This function is DEPRECATED. Do not use it in new code.
* Use pcim_iounmap_region() instead.
*/
void pcim_iounmap_regions(struct pci_dev *pdev, int mask)
{


@ -146,6 +146,7 @@ static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
{
struct pci_dev *pdev = doe_mb->pdev;
int offset = doe_mb->cap_offset;
unsigned long timeout_jiffies;
size_t length, remainder;
u32 val;
int i;
@ -155,8 +156,19 @@ static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
* someone other than Linux (e.g. firmware) is using the mailbox. Note
 * it is expected that firmware and OS will negotiate access rights via
 * an as-yet-undefined method.
*
* Wait up to one PCI_DOE_TIMEOUT period to allow the prior command to
* finish. Otherwise, simply error out as unable to field the request.
*
* PCIe r6.2 sec 6.30.3 states no interrupt is raised when the DOE Busy
* bit is cleared, so polling here is our best option for the moment.
*/
pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
timeout_jiffies = jiffies + PCI_DOE_TIMEOUT;
do {
pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
} while (FIELD_GET(PCI_DOE_STATUS_BUSY, val) &&
!time_after(jiffies, timeout_jiffies));
if (FIELD_GET(PCI_DOE_STATUS_BUSY, val))
return -EBUSY;


@ -55,7 +55,7 @@ struct pci_config_window *pci_ecam_create(struct device *dev,
bus_range_max = resource_size(cfgres) >> bus_shift;
if (bus_range > bus_range_max) {
bus_range = bus_range_max;
cfg->busr.end = busr->start + bus_range - 1;
resource_set_size(&cfg->busr, bus_range);
dev_warn(dev, "ECAM area %pR can only accommodate %pR (reduced from %pR desired)\n",
cfgres, &cfg->busr, busr);
}

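resource_set_size() above (and its sibling resource_set_range(), used in the SR-IOV hunks further down) are small helpers that replace open-coded end arithmetic; a sketch of the equivalence, assuming the include/linux/ioport.h definitions this series relies on:

	res->end = res->start + size - 1;	/* open-coded, off-by-one prone */
	resource_set_size(res, size);		/* same effect, keeps res->start */
	resource_set_range(res, start, size);	/* sets res->start as well */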

@@ -867,12 +867,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf)
{
struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct platform_device *pdev = to_platform_device(epc->dev.parent);
struct resource *res;
int ret;
/* Get MMIO base address from Endpoint controller */
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
if (!res) {
dev_err(dev, "Failed to get \"mmio\" resource\n");
return -ENODEV;
}
epf_mhi->mmio_phys = res->start;
epf_mhi->mmio_size = resource_size(res);


@@ -291,8 +291,6 @@ static void pci_epf_test_clean_dma_chan(struct pci_epf_test *epf_test)
dma_release_channel(epf_test->dma_chan_rx);
epf_test->dma_chan_rx = NULL;
return;
}
static void pci_epf_test_print_rate(struct pci_epf_test *epf_test,
@@ -317,91 +315,92 @@ static void pci_epf_test_print_rate(struct pci_epf_test *epf_test,
static void pci_epf_test_copy(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
void __iomem *src_addr;
void __iomem *dst_addr;
phys_addr_t src_phys_addr;
phys_addr_t dst_phys_addr;
int ret = 0;
struct timespec64 start, end;
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct pci_epc_map src_map, dst_map;
u64 src_addr = reg->src_addr;
u64 dst_addr = reg->dst_addr;
size_t copy_size = reg->size;
ssize_t map_size = 0;
void *copy_buf = NULL, *buf;
src_addr = pci_epc_mem_alloc_addr(epc, &src_phys_addr, reg->size);
if (!src_addr) {
dev_err(dev, "Failed to allocate source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
ret = -ENOMEM;
goto err;
}
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr,
reg->src_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto err_src_addr;
}
dst_addr = pci_epc_mem_alloc_addr(epc, &dst_phys_addr, reg->size);
if (!dst_addr) {
dev_err(dev, "Failed to allocate destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
ret = -ENOMEM;
goto err_src_map_addr;
}
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr,
reg->dst_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
goto err_dst_addr;
}
ktime_get_ts64(&start);
if (reg->flags & FLAG_USE_DMA) {
if (epf_test->dma_private) {
dev_err(dev, "Cannot transfer data using DMA\n");
ret = -EINVAL;
goto err_map_addr;
goto set_status;
}
ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
src_phys_addr, reg->size, 0,
DMA_MEM_TO_MEM);
if (ret)
dev_err(dev, "Data transfer failed\n");
} else {
void *buf;
buf = kzalloc(reg->size, GFP_KERNEL);
if (!buf) {
copy_buf = kzalloc(copy_size, GFP_KERNEL);
if (!copy_buf) {
ret = -ENOMEM;
goto err_map_addr;
goto set_status;
}
buf = copy_buf;
}
while (copy_size) {
ret = pci_epc_mem_map(epc, epf->func_no, epf->vfunc_no,
src_addr, copy_size, &src_map);
if (ret) {
dev_err(dev, "Failed to map source address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto free_buf;
}
memcpy_fromio(buf, src_addr, reg->size);
memcpy_toio(dst_addr, buf, reg->size);
kfree(buf);
ret = pci_epc_mem_map(epf->epc, epf->func_no, epf->vfunc_no,
dst_addr, copy_size, &dst_map);
if (ret) {
dev_err(dev, "Failed to map destination address\n");
reg->status = STATUS_DST_ADDR_INVALID;
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no,
&src_map);
goto free_buf;
}
map_size = min_t(size_t, dst_map.pci_size, src_map.pci_size);
ktime_get_ts64(&start);
if (reg->flags & FLAG_USE_DMA) {
ret = pci_epf_test_data_transfer(epf_test,
dst_map.phys_addr, src_map.phys_addr,
map_size, 0, DMA_MEM_TO_MEM);
if (ret) {
dev_err(dev, "Data transfer failed\n");
goto unmap;
}
} else {
memcpy_fromio(buf, src_map.virt_addr, map_size);
memcpy_toio(dst_map.virt_addr, buf, map_size);
buf += map_size;
}
ktime_get_ts64(&end);
copy_size -= map_size;
src_addr += map_size;
dst_addr += map_size;
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &dst_map);
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &src_map);
map_size = 0;
}
ktime_get_ts64(&end);
pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, dst_phys_addr);
pci_epf_test_print_rate(epf_test, "COPY", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
err_dst_addr:
pci_epc_mem_free_addr(epc, dst_phys_addr, dst_addr, reg->size);
unmap:
if (map_size) {
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &dst_map);
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &src_map);
}
err_src_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, src_phys_addr);
free_buf:
kfree(copy_buf);
err_src_addr:
pci_epc_mem_free_addr(epc, src_phys_addr, src_addr, reg->size);
err:
set_status:
if (!ret)
reg->status |= STATUS_COPY_SUCCESS;
else
@@ -411,82 +410,89 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
static void pci_epf_test_read(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
void __iomem *src_addr;
void *buf;
int ret = 0;
void *src_buf, *buf;
u32 crc32;
phys_addr_t phys_addr;
struct pci_epc_map map;
phys_addr_t dst_phys_addr;
struct timespec64 start, end;
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct device *dma_dev = epf->epc->dev.parent;
u64 src_addr = reg->src_addr;
size_t src_size = reg->size;
ssize_t map_size = 0;
src_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!src_addr) {
dev_err(dev, "Failed to allocate address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
src_buf = kzalloc(src_size, GFP_KERNEL);
if (!src_buf) {
ret = -ENOMEM;
goto err;
goto set_status;
}
buf = src_buf;
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
reg->src_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto err_addr;
}
buf = kzalloc(reg->size, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto err_map_addr;
}
if (reg->flags & FLAG_USE_DMA) {
dst_phys_addr = dma_map_single(dma_dev, buf, reg->size,
DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dst_phys_addr)) {
dev_err(dev, "Failed to map destination buffer addr\n");
ret = -ENOMEM;
goto err_dma_map;
while (src_size) {
ret = pci_epc_mem_map(epc, epf->func_no, epf->vfunc_no,
src_addr, src_size, &map);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_SRC_ADDR_INVALID;
goto free_buf;
}
ktime_get_ts64(&start);
ret = pci_epf_test_data_transfer(epf_test, dst_phys_addr,
phys_addr, reg->size,
reg->src_addr, DMA_DEV_TO_MEM);
if (ret)
dev_err(dev, "Data transfer failed\n");
ktime_get_ts64(&end);
map_size = map.pci_size;
if (reg->flags & FLAG_USE_DMA) {
dst_phys_addr = dma_map_single(dma_dev, buf, map_size,
DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dst_phys_addr)) {
dev_err(dev,
"Failed to map destination buffer addr\n");
ret = -ENOMEM;
goto unmap;
}
dma_unmap_single(dma_dev, dst_phys_addr, reg->size,
DMA_FROM_DEVICE);
} else {
ktime_get_ts64(&start);
memcpy_fromio(buf, src_addr, reg->size);
ktime_get_ts64(&end);
ktime_get_ts64(&start);
ret = pci_epf_test_data_transfer(epf_test,
dst_phys_addr, map.phys_addr,
map_size, src_addr, DMA_DEV_TO_MEM);
if (ret)
dev_err(dev, "Data transfer failed\n");
ktime_get_ts64(&end);
dma_unmap_single(dma_dev, dst_phys_addr, map_size,
DMA_FROM_DEVICE);
if (ret)
goto unmap;
} else {
ktime_get_ts64(&start);
memcpy_fromio(buf, map.virt_addr, map_size);
ktime_get_ts64(&end);
}
src_size -= map_size;
src_addr += map_size;
buf += map_size;
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &map);
map_size = 0;
}
pci_epf_test_print_rate(epf_test, "READ", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
pci_epf_test_print_rate(epf_test, "READ", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
crc32 = crc32_le(~0, buf, reg->size);
crc32 = crc32_le(~0, src_buf, reg->size);
if (crc32 != reg->checksum)
ret = -EIO;
err_dma_map:
kfree(buf);
unmap:
if (map_size)
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &map);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
free_buf:
kfree(src_buf);
err_addr:
pci_epc_mem_free_addr(epc, phys_addr, src_addr, reg->size);
err:
set_status:
if (!ret)
reg->status |= STATUS_READ_SUCCESS;
else
@@ -496,71 +502,79 @@ static void pci_epf_test_read(struct pci_epf_test *epf_test,
static void pci_epf_test_write(struct pci_epf_test *epf_test,
struct pci_epf_test_reg *reg)
{
int ret;
void __iomem *dst_addr;
void *buf;
phys_addr_t phys_addr;
int ret = 0;
void *dst_buf, *buf;
struct pci_epc_map map;
phys_addr_t src_phys_addr;
struct timespec64 start, end;
struct pci_epf *epf = epf_test->epf;
struct device *dev = &epf->dev;
struct pci_epc *epc = epf->epc;
struct device *dev = &epf->dev;
struct device *dma_dev = epf->epc->dev.parent;
u64 dst_addr = reg->dst_addr;
size_t dst_size = reg->size;
ssize_t map_size = 0;
dst_addr = pci_epc_mem_alloc_addr(epc, &phys_addr, reg->size);
if (!dst_addr) {
dev_err(dev, "Failed to allocate address\n");
reg->status = STATUS_DST_ADDR_INVALID;
dst_buf = kzalloc(dst_size, GFP_KERNEL);
if (!dst_buf) {
ret = -ENOMEM;
goto err;
goto set_status;
}
get_random_bytes(dst_buf, dst_size);
reg->checksum = crc32_le(~0, dst_buf, dst_size);
buf = dst_buf;
ret = pci_epc_map_addr(epc, epf->func_no, epf->vfunc_no, phys_addr,
reg->dst_addr, reg->size);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_DST_ADDR_INVALID;
goto err_addr;
}
buf = kzalloc(reg->size, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto err_map_addr;
}
get_random_bytes(buf, reg->size);
reg->checksum = crc32_le(~0, buf, reg->size);
if (reg->flags & FLAG_USE_DMA) {
src_phys_addr = dma_map_single(dma_dev, buf, reg->size,
DMA_TO_DEVICE);
if (dma_mapping_error(dma_dev, src_phys_addr)) {
dev_err(dev, "Failed to map source buffer addr\n");
ret = -ENOMEM;
goto err_dma_map;
while (dst_size) {
ret = pci_epc_mem_map(epc, epf->func_no, epf->vfunc_no,
dst_addr, dst_size, &map);
if (ret) {
dev_err(dev, "Failed to map address\n");
reg->status = STATUS_DST_ADDR_INVALID;
goto free_buf;
}
ktime_get_ts64(&start);
map_size = map.pci_size;
if (reg->flags & FLAG_USE_DMA) {
src_phys_addr = dma_map_single(dma_dev, buf, map_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dma_dev, src_phys_addr)) {
dev_err(dev,
"Failed to map source buffer addr\n");
ret = -ENOMEM;
goto unmap;
}
ret = pci_epf_test_data_transfer(epf_test, phys_addr,
src_phys_addr, reg->size,
reg->dst_addr,
DMA_MEM_TO_DEV);
if (ret)
dev_err(dev, "Data transfer failed\n");
ktime_get_ts64(&end);
ktime_get_ts64(&start);
dma_unmap_single(dma_dev, src_phys_addr, reg->size,
DMA_TO_DEVICE);
} else {
ktime_get_ts64(&start);
memcpy_toio(dst_addr, buf, reg->size);
ktime_get_ts64(&end);
ret = pci_epf_test_data_transfer(epf_test,
map.phys_addr, src_phys_addr,
map_size, dst_addr,
DMA_MEM_TO_DEV);
if (ret)
dev_err(dev, "Data transfer failed\n");
ktime_get_ts64(&end);
dma_unmap_single(dma_dev, src_phys_addr, map_size,
DMA_TO_DEVICE);
if (ret)
goto unmap;
} else {
ktime_get_ts64(&start);
memcpy_toio(map.virt_addr, buf, map_size);
ktime_get_ts64(&end);
}
dst_size -= map_size;
dst_addr += map_size;
buf += map_size;
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &map);
map_size = 0;
}
pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start, &end,
reg->flags & FLAG_USE_DMA);
pci_epf_test_print_rate(epf_test, "WRITE", reg->size, &start,
&end, reg->flags & FLAG_USE_DMA);
/*
* wait 1ms in order for the write to complete. Without this delay L3
@@ -568,16 +582,14 @@ static void pci_epf_test_write(struct pci_epf_test *epf_test,
*/
usleep_range(1000, 2000);
err_dma_map:
kfree(buf);
unmap:
if (map_size)
pci_epc_mem_unmap(epc, epf->func_no, epf->vfunc_no, &map);
err_map_addr:
pci_epc_unmap_addr(epc, epf->func_no, epf->vfunc_no, phys_addr);
free_buf:
kfree(dst_buf);
err_addr:
pci_epc_mem_free_addr(epc, phys_addr, dst_addr, reg->size);
err:
set_status:
if (!ret)
reg->status |= STATUS_WRITE_SUCCESS;
else
@@ -786,7 +798,7 @@ static void pci_epf_test_epc_deinit(struct pci_epf *epf)
{
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
cancel_delayed_work(&epf_test->cmd_handler);
cancel_delayed_work_sync(&epf_test->cmd_handler);
pci_epf_test_clean_dma_chan(epf_test);
pci_epf_test_clear_bar(epf);
}
@@ -917,7 +929,7 @@ static void pci_epf_test_unbind(struct pci_epf *epf)
struct pci_epf_test *epf_test = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;
cancel_delayed_work(&epf_test->cmd_handler);
cancel_delayed_work_sync(&epf_test->cmd_handler);
if (epc->init_complete) {
pci_epf_test_clean_dma_chan(epf_test);
pci_epf_test_clear_bar(epf);

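The two teardown hunks above also switch cancel_delayed_work() to cancel_delayed_work_sync(); a minimal sketch of why, assuming the self-rearming command handler this driver uses:

	/*
	 * cancel_delayed_work() only removes a pending work item; a handler
	 * that is already running (and may requeue itself) can still touch
	 * the DMA channels and BARs torn down next. The _sync variant waits
	 * for a running handler and prevents it from requeueing.
	 */
	cancel_delayed_work_sync(&epf_test->cmd_handler);
	pci_epf_test_clean_dma_chan(epf_test);	/* handler can no longer run */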

@@ -128,6 +128,18 @@ enum pci_barno pci_epc_get_next_free_bar(const struct pci_epc_features
}
EXPORT_SYMBOL_GPL(pci_epc_get_next_free_bar);
static bool pci_epc_function_is_valid(struct pci_epc *epc,
u8 func_no, u8 vfunc_no)
{
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return false;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return false;
return true;
}
/**
* pci_epc_get_features() - get the features supported by EPC
* @epc: the features supported by *this* EPC device will be returned
@@ -145,10 +157,7 @@ const struct pci_epc_features *pci_epc_get_features(struct pci_epc *epc,
{
const struct pci_epc_features *epc_features;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return NULL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return NULL;
if (!epc->ops->get_features)
@@ -218,10 +227,7 @@ int pci_epc_raise_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
int ret;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (!epc->ops->raise_irq)
@@ -262,10 +268,7 @@ int pci_epc_map_msi_irq(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
int ret;
if (IS_ERR_OR_NULL(epc))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (!epc->ops->map_msi_irq)
@@ -293,10 +296,7 @@ int pci_epc_get_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
int interrupt;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return 0;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return 0;
if (!epc->ops->get_msi)
@@ -329,11 +329,10 @@ int pci_epc_set_msi(struct pci_epc *epc, u8 func_no, u8 vfunc_no, u8 interrupts)
int ret;
u8 encode_int;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
interrupts < 1 || interrupts > 32)
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (interrupts < 1 || interrupts > 32)
return -EINVAL;
if (!epc->ops->set_msi)
@@ -361,10 +360,7 @@ int pci_epc_get_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no)
{
int interrupt;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return 0;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return 0;
if (!epc->ops->get_msix)
@@ -397,11 +393,10 @@ int pci_epc_set_msix(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
int ret;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
interrupts < 1 || interrupts > 2048)
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (interrupts < 1 || interrupts > 2048)
return -EINVAL;
if (!epc->ops->set_msix)
@@ -428,10 +423,7 @@ EXPORT_SYMBOL_GPL(pci_epc_set_msix);
void pci_epc_unmap_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
phys_addr_t phys_addr)
{
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return;
if (!epc->ops->unmap_addr)
@@ -459,10 +451,7 @@ int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
int ret;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (!epc->ops->map_addr)
@@ -477,6 +466,109 @@ int pci_epc_map_addr(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
}
EXPORT_SYMBOL_GPL(pci_epc_map_addr);
/**
* pci_epc_mem_map() - allocate and map a PCI address to a CPU address
* @epc: the EPC device on which the CPU address is to be allocated and mapped
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @pci_addr: PCI address to which the CPU address should be mapped
* @pci_size: the number of bytes to map starting from @pci_addr
* @map: where to return the mapping information
*
* Allocate a controller memory address region and map it to a RC PCI address
* region, taking into account the controller physical address mapping
* constraints using the controller operation align_addr(). If this operation is
* not defined, we assume that there are no alignment constraints for the
* mapping.
*
* The effective size of the PCI address range mapped from @pci_addr is
* indicated by @map->pci_size. This size may be less than the requested
* @pci_size. The local virtual CPU address for the mapping is indicated by
* @map->virt_addr (@map->phys_addr indicates the physical address).
* The size and CPU address of the controller memory allocated and mapped are
* respectively indicated by @map->map_size and @map->virt_base (and
* @map->phys_base for the physical address of @map->virt_base).
*
* Returns 0 on success and a negative error code in case of error.
*/
int pci_epc_mem_map(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
u64 pci_addr, size_t pci_size, struct pci_epc_map *map)
{
size_t map_size = pci_size;
size_t map_offset = 0;
int ret;
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if (!pci_size || !map)
return -EINVAL;
/*
* Align the PCI address to map. If the controller defines the
* .align_addr() operation, use it to determine the PCI address to map
* and the size of the mapping. Otherwise, assume that the controller
* has no alignment constraint.
*/
memset(map, 0, sizeof(*map));
map->pci_addr = pci_addr;
if (epc->ops->align_addr)
map->map_pci_addr =
epc->ops->align_addr(epc, pci_addr,
&map_size, &map_offset);
else
map->map_pci_addr = pci_addr;
map->map_size = map_size;
if (map->map_pci_addr + map->map_size < pci_addr + pci_size)
map->pci_size = map->map_pci_addr + map->map_size - pci_addr;
else
map->pci_size = pci_size;
map->virt_base = pci_epc_mem_alloc_addr(epc, &map->phys_base,
map->map_size);
if (!map->virt_base)
return -ENOMEM;
map->phys_addr = map->phys_base + map_offset;
map->virt_addr = map->virt_base + map_offset;
ret = pci_epc_map_addr(epc, func_no, vfunc_no, map->phys_base,
map->map_pci_addr, map->map_size);
if (ret) {
pci_epc_mem_free_addr(epc, map->phys_base, map->virt_base,
map->map_size);
return ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(pci_epc_mem_map);
/**
* pci_epc_mem_unmap() - unmap and free a CPU address region
* @epc: the EPC device on which the CPU address is allocated and mapped
* @func_no: the physical endpoint function number in the EPC device
* @vfunc_no: the virtual endpoint function number in the physical function
* @map: the mapping information
*
* Unmap and free a CPU address region that was allocated and mapped with
* pci_epc_mem_map().
*/
void pci_epc_mem_unmap(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epc_map *map)
{
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return;
if (!map || !map->virt_base)
return;
pci_epc_unmap_addr(epc, func_no, vfunc_no, map->phys_base);
pci_epc_mem_free_addr(epc, map->phys_base, map->virt_base,
map->map_size);
}
EXPORT_SYMBOL_GPL(pci_epc_mem_unmap);
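Since @map->pci_size may come back smaller than the requested size, callers are expected to loop until the whole range has been visited, as the pci-epf-test rework above does. A minimal caller sketch (error paths trimmed; epc, func_no, vfunc_no, pci_addr and size come from the caller):

	while (size) {
		struct pci_epc_map map;
		int ret;

		ret = pci_epc_mem_map(epc, func_no, vfunc_no, pci_addr,
				      size, &map);
		if (ret)
			return ret;

		/* use map.virt_addr/map.phys_addr for map.pci_size bytes */

		pci_epc_mem_unmap(epc, func_no, vfunc_no, &map);
		pci_addr += map.pci_size;
		size -= map.pci_size;
	}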
/**
* pci_epc_clear_bar() - reset the BAR
* @epc: the EPC device for which the BAR has to be cleared
@@ -489,12 +581,11 @@ EXPORT_SYMBOL_GPL(pci_epc_map_addr);
void pci_epc_clear_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
struct pci_epf_bar *epf_bar)
{
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
(epf_bar->barno == BAR_5 &&
epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (epf_bar->barno == BAR_5 &&
epf_bar->flags & PCI_BASE_ADDRESS_MEM_TYPE_64)
return;
if (!epc->ops->clear_bar)
@@ -521,18 +612,16 @@ int pci_epc_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
int ret;
int flags = epf_bar->flags;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions ||
(epf_bar->barno == BAR_5 &&
flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ||
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
if ((epf_bar->barno == BAR_5 && flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ||
(flags & PCI_BASE_ADDRESS_SPACE_IO &&
flags & PCI_BASE_ADDRESS_IO_MASK) ||
(upper_32_bits(epf_bar->size) &&
!(flags & PCI_BASE_ADDRESS_MEM_TYPE_64)))
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
return -EINVAL;
if (!epc->ops->set_bar)
return 0;
@@ -561,10 +650,7 @@ int pci_epc_write_header(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
{
int ret;
if (IS_ERR_OR_NULL(epc) || func_no >= epc->max_functions)
return -EINVAL;
if (vfunc_no > 0 && (!epc->max_vfs || vfunc_no > epc->max_vfs[func_no]))
if (!pci_epc_function_is_valid(epc, func_no, vfunc_no))
return -EINVAL;
/* Only Virtual Function #1 has deviceID */
@@ -660,18 +746,18 @@ void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
if (IS_ERR_OR_NULL(epc) || !epf)
return;
mutex_lock(&epc->list_lock);
if (type == PRIMARY_INTERFACE) {
func_no = epf->func_no;
list = &epf->list;
epf->epc = NULL;
} else {
func_no = epf->sec_epc_func_no;
list = &epf->sec_epc_list;
epf->sec_epc = NULL;
}
mutex_lock(&epc->list_lock);
clear_bit(func_no, &epc->function_num_map);
list_del(list);
epf->epc = NULL;
mutex_unlock(&epc->list_lock);
}
EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
@@ -837,11 +923,10 @@ EXPORT_SYMBOL_GPL(pci_epc_bus_master_enable_notify);
void pci_epc_destroy(struct pci_epc *epc)
{
pci_ep_cfs_remove_epc_group(epc->group);
device_unregister(&epc->dev);
#ifdef CONFIG_PCI_DOMAINS_GENERIC
pci_bus_release_domain_nr(&epc->dev, epc->domain_nr);
pci_bus_release_domain_nr(epc->dev.parent, epc->domain_nr);
#endif
device_unregister(&epc->dev);
}
EXPORT_SYMBOL_GPL(pci_epc_destroy);


@@ -178,7 +178,7 @@ EXPORT_SYMBOL_GPL(pci_epc_mem_exit);
void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
phys_addr_t *phys_addr, size_t size)
{
void __iomem *virt_addr = NULL;
void __iomem *virt_addr;
struct pci_epc_mem *mem;
unsigned int page_shift;
size_t align_size;
@@ -188,10 +188,13 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
for (i = 0; i < epc->num_windows; i++) {
mem = epc->windows[i];
mutex_lock(&mem->lock);
if (size > mem->window.size)
continue;
align_size = ALIGN(size, mem->window.page_size);
order = pci_epc_mem_get_order(mem, align_size);
mutex_lock(&mem->lock);
pageno = bitmap_find_free_region(mem->bitmap, mem->pages,
order);
if (pageno >= 0) {
@@ -211,7 +214,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
mutex_unlock(&mem->lock);
}
return virt_addr;
return NULL;
}
EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);


@@ -118,6 +118,16 @@ config HOTPLUG_PCI_CPCI_GENERIC
When in doubt, say N.
config HOTPLUG_PCI_OCTEONEP
bool "Marvell OCTEON PCI Hotplug driver"
depends on HOTPLUG_PCI
help
Say Y here if you have an OCTEON PCIe device with a hotplug
controller. This driver enables the non-controller functions of the
device to be registered as hotplug slots.
When in doubt, say N.
config HOTPLUG_PCI_SHPC
bool "SHPC PCI Hotplug driver"
help


@@ -20,6 +20,7 @@ obj-$(CONFIG_HOTPLUG_PCI_RPA) += rpaphp.o
obj-$(CONFIG_HOTPLUG_PCI_RPA_DLPAR) += rpadlpar_io.o
obj-$(CONFIG_HOTPLUG_PCI_ACPI) += acpiphp.o
obj-$(CONFIG_HOTPLUG_PCI_S390) += s390_pci_hpc.o
obj-$(CONFIG_HOTPLUG_PCI_OCTEONEP) += octep_hp.o
# acpiphp_ibm extends acpiphp, so should be linked afterwards.


@@ -119,7 +119,7 @@ static struct platform_driver altra_led_driver = {
.acpi_match_table = altra_led_ids,
},
.probe = altra_led_probe,
.remove_new = altra_led_remove,
.remove = altra_led_remove,
};
module_platform_driver(altra_led_driver);


@@ -44,7 +44,6 @@ struct cpci_hp_controller_ops {
int (*enable_irq)(void);
int (*disable_irq)(void);
int (*check_irq)(void *dev_id);
int (*hardware_test)(struct slot *slot, u32 value);
u8 (*get_power)(struct slot *slot);
int (*set_power)(struct slot *slot, int value);
};


@@ -12,8 +12,11 @@
*
*/
#define pr_fmt(fmt) "cpqphp: " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
@@ -132,18 +135,6 @@ int cpqhp_unconfigure_device(struct pci_func *func)
return 0;
}
static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
{
u32 vendID = 0;
if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
return -1;
if (PCI_POSSIBLE_ERROR(vendID))
return -1;
return pci_bus_read_config_dword(bus, devfn, offset, value);
}
/*
* cpqhp_set_irq
*
@@ -202,13 +193,16 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
{
u16 tdevice;
u32 work;
u8 tbus;
int ret = -1;
ctrl->pci_bus->number = bus_num;
for (tdevice = 0; tdevice < 0xFF; tdevice++) {
/* Scan for access first */
if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
if (!pci_bus_read_dev_vendor_id(ctrl->pci_bus, tdevice, &work, 0))
continue;
ret = pci_bus_read_config_dword(ctrl->pci_bus, tdevice, PCI_CLASS_REVISION, &work);
if (ret)
continue;
dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
/* Yep we got one. Not a bridge ? */
@@ -216,23 +210,20 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
*dev_num = tdevice;
dbg("found it !\n");
return 0;
}
}
for (tdevice = 0; tdevice < 0xFF; tdevice++) {
/* Scan for access first */
if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
continue;
dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
/* Yep we got one. bridge ? */
if ((work >> 8) == PCI_TO_PCI_BRIDGE_CLASS) {
pci_bus_read_config_byte(ctrl->pci_bus, PCI_DEVFN(tdevice, 0), PCI_SECONDARY_BUS, &tbus);
/* XXX: no recursion, wtf? */
dbg("Recurse on bus_num %d tdevice %d\n", tbus, tdevice);
return 0;
} else {
/*
* XXX: Code whose debug printout indicated
* recursion to buses underneath bridges might be
* necessary was removed because it never did
* any recursion.
*/
ret = 0;
pr_warn("missing feature: bridge scan recursion not implemented\n");
}
}
return -1;
return ret;
}


@@ -123,7 +123,6 @@ static int spew_debug_info(struct controller *ctrl, char *data, int size)
struct ctrl_dbg {
int size;
char *data;
struct controller *ctrl;
};
#define MAX_OUTPUT (4*PAGE_SIZE)


@@ -0,0 +1,427 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (C) 2024 Marvell. */
#include <linux/cleanup.h>
#include <linux/container_of.h>
#include <linux/delay.h>
#include <linux/dev_printk.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pci_hotplug.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#define OCTEP_HP_INTR_OFFSET(x) (0x20400 + ((x) << 4))
#define OCTEP_HP_INTR_VECTOR(x) (16 + (x))
#define OCTEP_HP_DRV_NAME "octep_hp"
/*
* Type of MSI-X interrupts. OCTEP_HP_INTR_VECTOR() and
* OCTEP_HP_INTR_OFFSET() generate the vector and offset for an interrupt
* type.
*/
enum octep_hp_intr_type {
OCTEP_HP_INTR_INVALID = -1,
OCTEP_HP_INTR_ENA = 0,
OCTEP_HP_INTR_DIS = 1,
OCTEP_HP_INTR_MAX = 2,
};
struct octep_hp_cmd {
struct list_head list;
enum octep_hp_intr_type intr_type;
u64 intr_val;
};
struct octep_hp_slot {
struct list_head list;
struct hotplug_slot slot;
u16 slot_number;
struct pci_dev *hp_pdev;
unsigned int hp_devfn;
struct octep_hp_controller *ctrl;
};
struct octep_hp_intr_info {
enum octep_hp_intr_type type;
int number;
char name[16];
};
struct octep_hp_controller {
void __iomem *base;
struct pci_dev *pdev;
struct octep_hp_intr_info intr[OCTEP_HP_INTR_MAX];
struct work_struct work;
struct list_head slot_list;
struct mutex slot_lock; /* Protects slot_list */
struct list_head hp_cmd_list;
spinlock_t hp_cmd_lock; /* Protects hp_cmd_list */
};
static void octep_hp_enable_pdev(struct octep_hp_controller *hp_ctrl,
struct octep_hp_slot *hp_slot)
{
guard(mutex)(&hp_ctrl->slot_lock);
if (hp_slot->hp_pdev) {
pci_dbg(hp_slot->hp_pdev, "Slot %s is already enabled\n",
hotplug_slot_name(&hp_slot->slot));
return;
}
/* Scan the device and add it to the bus */
hp_slot->hp_pdev = pci_scan_single_device(hp_ctrl->pdev->bus,
hp_slot->hp_devfn);
pci_bus_assign_resources(hp_ctrl->pdev->bus);
pci_bus_add_device(hp_slot->hp_pdev);
dev_dbg(&hp_slot->hp_pdev->dev, "Enabled slot %s\n",
hotplug_slot_name(&hp_slot->slot));
}
static void octep_hp_disable_pdev(struct octep_hp_controller *hp_ctrl,
struct octep_hp_slot *hp_slot)
{
guard(mutex)(&hp_ctrl->slot_lock);
if (!hp_slot->hp_pdev) {
pci_dbg(hp_ctrl->pdev, "Slot %s is already disabled\n",
hotplug_slot_name(&hp_slot->slot));
return;
}
pci_dbg(hp_slot->hp_pdev, "Disabling slot %s\n",
hotplug_slot_name(&hp_slot->slot));
/* Remove the device from the bus */
pci_stop_and_remove_bus_device_locked(hp_slot->hp_pdev);
hp_slot->hp_pdev = NULL;
}
static int octep_hp_enable_slot(struct hotplug_slot *slot)
{
struct octep_hp_slot *hp_slot =
container_of(slot, struct octep_hp_slot, slot);
octep_hp_enable_pdev(hp_slot->ctrl, hp_slot);
return 0;
}
static int octep_hp_disable_slot(struct hotplug_slot *slot)
{
struct octep_hp_slot *hp_slot =
container_of(slot, struct octep_hp_slot, slot);
octep_hp_disable_pdev(hp_slot->ctrl, hp_slot);
return 0;
}
static struct hotplug_slot_ops octep_hp_slot_ops = {
.enable_slot = octep_hp_enable_slot,
.disable_slot = octep_hp_disable_slot,
};
#define SLOT_NAME_SIZE 16
static struct octep_hp_slot *
octep_hp_register_slot(struct octep_hp_controller *hp_ctrl,
struct pci_dev *pdev, u16 slot_number)
{
char slot_name[SLOT_NAME_SIZE];
struct octep_hp_slot *hp_slot;
int ret;
hp_slot = kzalloc(sizeof(*hp_slot), GFP_KERNEL);
if (!hp_slot)
return ERR_PTR(-ENOMEM);
hp_slot->ctrl = hp_ctrl;
hp_slot->hp_pdev = pdev;
hp_slot->hp_devfn = pdev->devfn;
hp_slot->slot_number = slot_number;
hp_slot->slot.ops = &octep_hp_slot_ops;
snprintf(slot_name, sizeof(slot_name), "octep_hp_%u", slot_number);
ret = pci_hp_register(&hp_slot->slot, hp_ctrl->pdev->bus,
PCI_SLOT(pdev->devfn), slot_name);
if (ret) {
kfree(hp_slot);
return ERR_PTR(ret);
}
pci_info(pdev, "Registered slot %s for device %s\n",
slot_name, pci_name(pdev));
list_add_tail(&hp_slot->list, &hp_ctrl->slot_list);
octep_hp_disable_pdev(hp_ctrl, hp_slot);
return hp_slot;
}
static void octep_hp_deregister_slot(void *data)
{
struct octep_hp_slot *hp_slot = data;
struct octep_hp_controller *hp_ctrl = hp_slot->ctrl;
pci_hp_deregister(&hp_slot->slot);
octep_hp_enable_pdev(hp_ctrl, hp_slot);
list_del(&hp_slot->list);
kfree(hp_slot);
}
static const char *octep_hp_cmd_name(enum octep_hp_intr_type type)
{
switch (type) {
case OCTEP_HP_INTR_ENA:
return "hotplug enable";
case OCTEP_HP_INTR_DIS:
return "hotplug disable";
default:
return "invalid";
}
}
static void octep_hp_cmd_handler(struct octep_hp_controller *hp_ctrl,
struct octep_hp_cmd *hp_cmd)
{
struct octep_hp_slot *hp_slot;
/*
* Enable or disable the slots based on the slot mask.
* intr_val is a bit mask where each bit represents a slot.
*/
list_for_each_entry(hp_slot, &hp_ctrl->slot_list, list) {
if (!(hp_cmd->intr_val & BIT(hp_slot->slot_number)))
continue;
pci_info(hp_ctrl->pdev, "Received %s command for slot %s\n",
octep_hp_cmd_name(hp_cmd->intr_type),
hotplug_slot_name(&hp_slot->slot));
switch (hp_cmd->intr_type) {
case OCTEP_HP_INTR_ENA:
octep_hp_enable_pdev(hp_ctrl, hp_slot);
break;
case OCTEP_HP_INTR_DIS:
octep_hp_disable_pdev(hp_ctrl, hp_slot);
break;
default:
break;
}
}
}
static void octep_hp_work_handler(struct work_struct *work)
{
struct octep_hp_controller *hp_ctrl;
struct octep_hp_cmd *hp_cmd;
unsigned long flags;
hp_ctrl = container_of(work, struct octep_hp_controller, work);
/* Process all the hotplug commands */
spin_lock_irqsave(&hp_ctrl->hp_cmd_lock, flags);
while (!list_empty(&hp_ctrl->hp_cmd_list)) {
hp_cmd = list_first_entry(&hp_ctrl->hp_cmd_list,
struct octep_hp_cmd, list);
list_del(&hp_cmd->list);
spin_unlock_irqrestore(&hp_ctrl->hp_cmd_lock, flags);
octep_hp_cmd_handler(hp_ctrl, hp_cmd);
kfree(hp_cmd);
spin_lock_irqsave(&hp_ctrl->hp_cmd_lock, flags);
}
spin_unlock_irqrestore(&hp_ctrl->hp_cmd_lock, flags);
}
static enum octep_hp_intr_type octep_hp_intr_type(struct octep_hp_intr_info *intr,
int irq)
{
enum octep_hp_intr_type type;
for (type = OCTEP_HP_INTR_ENA; type < OCTEP_HP_INTR_MAX; type++) {
if (intr[type].number == irq)
return type;
}
return OCTEP_HP_INTR_INVALID;
}
static irqreturn_t octep_hp_intr_handler(int irq, void *data)
{
struct octep_hp_controller *hp_ctrl = data;
struct pci_dev *pdev = hp_ctrl->pdev;
enum octep_hp_intr_type type;
struct octep_hp_cmd *hp_cmd;
u64 intr_val;
type = octep_hp_intr_type(hp_ctrl->intr, irq);
if (type == OCTEP_HP_INTR_INVALID) {
pci_err(pdev, "Invalid interrupt %d\n", irq);
return IRQ_HANDLED;
}
/* Read and clear the interrupt */
intr_val = readq(hp_ctrl->base + OCTEP_HP_INTR_OFFSET(type));
writeq(intr_val, hp_ctrl->base + OCTEP_HP_INTR_OFFSET(type));
hp_cmd = kzalloc(sizeof(*hp_cmd), GFP_ATOMIC);
if (!hp_cmd)
return IRQ_HANDLED;
hp_cmd->intr_val = intr_val;
hp_cmd->intr_type = type;
/* Add the command to the list and schedule the work */
spin_lock(&hp_ctrl->hp_cmd_lock);
list_add_tail(&hp_cmd->list, &hp_ctrl->hp_cmd_list);
spin_unlock(&hp_ctrl->hp_cmd_lock);
schedule_work(&hp_ctrl->work);
return IRQ_HANDLED;
}
static void octep_hp_irq_cleanup(void *data)
{
struct octep_hp_controller *hp_ctrl = data;
pci_free_irq_vectors(hp_ctrl->pdev);
flush_work(&hp_ctrl->work);
}
static int octep_hp_request_irq(struct octep_hp_controller *hp_ctrl,
enum octep_hp_intr_type type)
{
struct pci_dev *pdev = hp_ctrl->pdev;
struct octep_hp_intr_info *intr;
int irq;
irq = pci_irq_vector(pdev, OCTEP_HP_INTR_VECTOR(type));
if (irq < 0)
return irq;
intr = &hp_ctrl->intr[type];
intr->number = irq;
intr->type = type;
snprintf(intr->name, sizeof(intr->name), "octep_hp_%d", type);
return devm_request_irq(&pdev->dev, irq, octep_hp_intr_handler,
IRQF_SHARED, intr->name, hp_ctrl);
}
static int octep_hp_controller_setup(struct pci_dev *pdev,
struct octep_hp_controller *hp_ctrl)
{
struct device *dev = &pdev->dev;
enum octep_hp_intr_type type;
int ret;
ret = pcim_enable_device(pdev);
if (ret)
return dev_err_probe(dev, ret, "Failed to enable PCI device\n");
hp_ctrl->base = pcim_iomap_region(pdev, 0, OCTEP_HP_DRV_NAME);
if (IS_ERR(hp_ctrl->base))
return dev_err_probe(dev, PTR_ERR(hp_ctrl->base),
"Failed to map PCI device region\n");
pci_set_master(pdev);
pci_set_drvdata(pdev, hp_ctrl);
INIT_LIST_HEAD(&hp_ctrl->slot_list);
INIT_LIST_HEAD(&hp_ctrl->hp_cmd_list);
mutex_init(&hp_ctrl->slot_lock);
spin_lock_init(&hp_ctrl->hp_cmd_lock);
INIT_WORK(&hp_ctrl->work, octep_hp_work_handler);
hp_ctrl->pdev = pdev;
ret = pci_alloc_irq_vectors(pdev, 1,
OCTEP_HP_INTR_VECTOR(OCTEP_HP_INTR_MAX),
PCI_IRQ_MSIX);
if (ret < 0)
return dev_err_probe(dev, ret, "Failed to alloc MSI-X vectors\n");
ret = devm_add_action(&pdev->dev, octep_hp_irq_cleanup, hp_ctrl);
if (ret)
return dev_err_probe(&pdev->dev, ret, "Failed to add IRQ cleanup action\n");
for (type = OCTEP_HP_INTR_ENA; type < OCTEP_HP_INTR_MAX; type++) {
ret = octep_hp_request_irq(hp_ctrl, type);
if (ret)
return dev_err_probe(dev, ret,
"Failed to request IRQ for vector %d\n",
OCTEP_HP_INTR_VECTOR(type));
}
return 0;
}
static int octep_hp_pci_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
struct octep_hp_controller *hp_ctrl;
struct pci_dev *tmp_pdev, *next;
struct octep_hp_slot *hp_slot;
u16 slot_number = 0;
int ret;
hp_ctrl = devm_kzalloc(&pdev->dev, sizeof(*hp_ctrl), GFP_KERNEL);
if (!hp_ctrl)
return -ENOMEM;
ret = octep_hp_controller_setup(pdev, hp_ctrl);
if (ret)
return ret;
/*
* Register all hotplug slots. Hotplug controller is the first function
* of the PCI device. The hotplug slots are the remaining functions of
* the PCI device. The hotplug slot functions are logically removed from
* the bus during probing and are re-enabled by the driver when a
* hotplug event is received.
*/
list_for_each_entry_safe(tmp_pdev, next, &pdev->bus->devices, bus_list) {
if (tmp_pdev == pdev)
continue;
hp_slot = octep_hp_register_slot(hp_ctrl, tmp_pdev, slot_number);
if (IS_ERR(hp_slot))
return dev_err_probe(&pdev->dev, PTR_ERR(hp_slot),
"Failed to register hotplug slot %u\n",
slot_number);
ret = devm_add_action(&pdev->dev, octep_hp_deregister_slot,
hp_slot);
if (ret)
return dev_err_probe(&pdev->dev, ret,
"Failed to add action for deregistering slot %u\n",
slot_number);
slot_number++;
}
return 0;
}
#define PCI_DEVICE_ID_CAVIUM_OCTEP_HP_CTLR 0xa0e3
static struct pci_device_id octep_hp_pci_map[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVICE_ID_CAVIUM_OCTEP_HP_CTLR) },
{ },
};
static struct pci_driver octep_hp = {
.name = OCTEP_HP_DRV_NAME,
.id_table = octep_hp_pci_map,
.probe = octep_hp_pci_probe,
};
module_pci_driver(octep_hp);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Marvell");
MODULE_DESCRIPTION("Marvell OCTEON PCI Hotplug driver");
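Once bound, each non-controller function is exposed through the PCI hotplug core as a slot named octep_hp_<n>, so the enable/disable callbacks above can presumably also be driven manually through the hotplug core's sysfs slot interface, in addition to the firmware-raised MSI-X events.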


@@ -388,8 +388,8 @@ static struct hotplug_slot *get_slot_from_name(const char *name)
/**
* __pci_hp_register - register a hotplug_slot with the PCI hotplug subsystem
* @bus: bus this slot is on
* @slot: pointer to the &struct hotplug_slot to register
* @bus: bus this slot is on
* @devnr: device number
* @name: name registered with kobject core
* @owner: caller module owner
@@ -498,8 +498,6 @@ EXPORT_SYMBOL_GPL(pci_hp_add);
*
* The @slot must have been registered with the pci hotplug subsystem
* previously with a call to pci_hp_register().
*
* Returns 0 if successful, anything else for an error.
*/
void pci_hp_deregister(struct hotplug_slot *slot)
{
@@ -513,8 +511,6 @@ EXPORT_SYMBOL_GPL(pci_hp_deregister);
* @slot: pointer to the &struct hotplug_slot to unpublish
*
* Remove a hotplug slot's sysfs interface.
*
* Returns 0 on success or a negative int on error.
*/
void pci_hp_del(struct hotplug_slot *slot)
{
@@ -545,8 +541,6 @@ EXPORT_SYMBOL_GPL(pci_hp_del);
* the driver may no longer invoke hotplug_slot_name() to get the slot's
* unique name. The driver no longer needs to handle a ->reset_slot callback
* from this point on.
*
* Returns 0 on success or a negative int on error.
*/
void pci_hp_destroy(struct hotplug_slot *slot)
{


@@ -19,6 +19,8 @@
#include <linux/types.h>
#include <linux/pm_runtime.h>
#include <linux/pci.h>
#include "../pci.h"
#include "pciehp.h"
/* The following routines constitute the bulk of the
@@ -127,6 +129,9 @@ static void remove_board(struct controller *ctrl, bool safe_removal)
pciehp_set_indicators(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
INDICATOR_NOOP);
/* Don't carry LBMS indications across */
pcie_reset_lbms_count(ctrl->pcie->port);
}
static int pciehp_enable_slot(struct controller *ctrl);


@@ -319,7 +319,7 @@ int pciehp_check_link_status(struct controller *ctrl)
return -1;
}
pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
__pcie_update_link_speed(ctrl->pcie->port->subordinate, lnk_status);
if (!found) {
ctrl_info(ctrl, "Slot(%s): No device found\n",


@@ -327,8 +327,8 @@ int pci_iov_add_virtfn(struct pci_dev *dev, int id)
virtfn->resource[i].name = pci_name(virtfn);
virtfn->resource[i].flags = res->flags;
size = pci_iov_resource_size(dev, i + PCI_IOV_RESOURCES);
virtfn->resource[i].start = res->start + size * id;
virtfn->resource[i].end = virtfn->resource[i].start + size - 1;
resource_set_range(&virtfn->resource[i],
res->start + size * id, size);
rc = request_resource(res, &virtfn->resource[i]);
BUG_ON(rc);
}
@@ -804,7 +804,7 @@ static int sriov_init(struct pci_dev *dev, int pos)
goto failed;
}
iov->barsz[i] = resource_size(res);
res->end = res->start + resource_size(res) * total - 1;
resource_set_size(res, resource_size(res) * total);
pci_info(dev, "%s %pR: contains BAR %d for %d VFs\n",
res_name, res, i, total);
i += bar64;


@@ -728,6 +728,33 @@ void of_pci_make_dev_node(struct pci_dev *pdev)
}
#endif
/**
* of_pci_supply_present() - Check if the power supply is present for the PCI
* device
* @np: Device tree node
*
* Check if the power supply for the PCI device is present in the device tree
* node or not.
*
* Return: true if at least one power supply exists; false otherwise.
*/
bool of_pci_supply_present(struct device_node *np)
{
struct property *prop;
char *supply;
if (!np)
return false;
for_each_property_of_node(np, prop) {
supply = strrchr(prop->name, '-');
if (supply && !strcmp(supply, "-supply"))
return true;
}
return false;
}
#endif /* CONFIG_PCI */
/**


@@ -126,7 +126,7 @@ static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs,
if (of_pci_get_addr_flags(&res[j], &flags))
continue;
val64 = res[j].start;
val64 = pci_bus_address(pdev, &res[j] - pdev->resource);
of_pci_set_address(pdev, rp[i].parent_addr, val64, 0, flags,
false);
if (pci_is_bridge(pdev)) {

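The one-line change above is what makes dynamically generated "ranges" properties correct on hosts where CPU and PCI bus addresses differ; an illustrative sketch:

	/* pdev->resource[] holds CPU-side addresses; the DT "ranges" entry
	 * wants the address as seen on the PCI bus, which differs whenever
	 * the host bridge applies an address offset. */
	u64 cpu_addr = res[j].start;
	u64 bus_addr = pci_bus_address(pdev, &res[j] - pdev->resource);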

@@ -521,6 +521,31 @@ static ssize_t bus_rescan_store(struct device *dev,
static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
bus_rescan_store);
static ssize_t reset_subordinate_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct pci_bus *bus = pdev->subordinate;
unsigned long val;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (kstrtoul(buf, 0, &val) < 0)
return -EINVAL;
if (val) {
int ret = __pci_reset_bus(bus);
if (ret)
return ret;
}
return count;
}
static DEVICE_ATTR_WO(reset_subordinate);
#if defined(CONFIG_PM) && defined(CONFIG_ACPI)
static ssize_t d3cold_allowed_store(struct device *dev,
struct device_attribute *attr,
@@ -625,6 +650,7 @@ static struct attribute *pci_dev_attrs[] = {
static struct attribute *pci_bridge_attrs[] = {
&dev_attr_subordinate_bus_number.attr,
&dev_attr_secondary_bus_number.attr,
&dev_attr_reset_subordinate.attr,
NULL,
};


@@ -1832,6 +1832,7 @@ int pci_save_state(struct pci_dev *dev)
pci_save_dpc_state(dev);
pci_save_aer_state(dev);
pci_save_ptm_state(dev);
pci_save_tph_state(dev);
return pci_save_vc_state(dev);
}
EXPORT_SYMBOL(pci_save_state);
@@ -1937,6 +1938,7 @@ void pci_restore_state(struct pci_dev *dev)
pci_restore_rebar_state(dev);
pci_restore_dpc_state(dev);
pci_restore_ptm_state(dev);
pci_restore_tph_state(dev);
pci_aer_clear_status(dev);
pci_restore_aer_state(dev);
@@ -4744,7 +4746,7 @@ int pcie_retrain_link(struct pci_dev *pdev, bool use_lt)
* to track link speed or width changes made by hardware itself
* in attempt to correct unreliable link operation.
*/
pcie_capability_write_word(pdev, PCI_EXP_LNKSTA, PCI_EXP_LNKSTA_LBMS);
pcie_reset_lbms_count(pdev);
return rc;
}
@@ -5162,6 +5164,8 @@ static void pci_dev_save_and_disable(struct pci_dev *dev)
*/
if (err_handler && err_handler->reset_prepare)
err_handler->reset_prepare(dev);
else if (dev->driver)
pci_warn(dev, "resetting");
/*
* Wake-up device prior to save. PM registers default to D0 after
@@ -5195,6 +5199,8 @@ static void pci_dev_restore(struct pci_dev *dev)
*/
if (err_handler && err_handler->reset_done)
err_handler->reset_done(dev);
else if (dev->driver)
pci_warn(dev, "reset done");
}
/* dev->reset_methods[] is a 0-terminated list of indices into this array */
@@ -5248,7 +5254,7 @@ static ssize_t reset_method_store(struct device *dev,
const char *buf, size_t count)
{
struct pci_dev *pdev = to_pci_dev(dev);
char *options, *name;
char *options, *tmp_options, *name;
int m, n;
u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
@@ -5268,7 +5274,8 @@ static ssize_t reset_method_store(struct device *dev,
return -ENOMEM;
n = 0;
while ((name = strsep(&options, " ")) != NULL) {
tmp_options = options;
while ((name = strsep(&tmp_options, " ")) != NULL) {
if (sysfs_streq(name, ""))
continue;
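The tmp_options scratch pointer exists because strsep() advances the pointer it is handed, so the original allocation address must be preserved for kfree(). A self-contained sketch of the leak being fixed (buffer contents hypothetical):

	char *options = kstrndup("flr bus", 7, GFP_KERNEL);
	char *tmp = options, *name;

	while ((name = strsep(&tmp, " ")) != NULL)
		;			/* tmp is NULL once tokens run out */

	kfree(options);			/* still the kstrndup() pointer; passing
					 * options itself to strsep() would have
					 * left it NULL and leaked the buffer */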
@@ -5884,7 +5891,7 @@ EXPORT_SYMBOL_GPL(pci_probe_reset_bus);
*
* Same as above except return -EAGAIN if the bus cannot be locked
*/
static int __pci_reset_bus(struct pci_bus *bus)
int __pci_reset_bus(struct pci_bus *bus)
{
int rc;
@@ -6192,39 +6199,65 @@ u32 pcie_bandwidth_available(struct pci_dev *dev, struct pci_dev **limiting_dev,
}
EXPORT_SYMBOL(pcie_bandwidth_available);
/**
* pcie_get_supported_speeds - query Supported Link Speed Vector
* @dev: PCI device to query
*
* Query @dev supported link speeds.
*
* Implementation Note in PCIe r6.0 sec 7.5.3.18 recommends determining
* supported link speeds using the Supported Link Speeds Vector in the Link
* Capabilities 2 Register (when available).
*
* Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18.
*
* Without Link Capabilities 2, i.e., prior to PCIe r3.0, Supported Link
* Speeds field in Link Capabilities is used and only 2.5 GT/s and 5.0 GT/s
* speeds were defined.
*
* For @dev without Supported Link Speed Vector, the field is synthesized
* from the Max Link Speed field in the Link Capabilities Register.
*
* Return: Supported Link Speeds Vector (+ reserved 0 at LSB).
*/
u8 pcie_get_supported_speeds(struct pci_dev *dev)
{
u32 lnkcap2, lnkcap;
u8 speeds;
/*
* Speeds retain the reserved 0 at LSB before PCIe Supported Link
* Speeds Vector to allow using SLS Vector bit defines directly.
*/
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
speeds = lnkcap2 & PCI_EXP_LNKCAP2_SLS;
/* PCIe r3.0-compliant */
if (speeds)
return speeds;
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
/* Synthesize from the Max Link Speed field */
if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
speeds = PCI_EXP_LNKCAP2_SLS_5_0GB | PCI_EXP_LNKCAP2_SLS_2_5GB;
else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB)
speeds = PCI_EXP_LNKCAP2_SLS_2_5GB;
return speeds;
}
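For example, a pre-r3.0 device whose Max Link Speed field reports 5.0 GT/s is synthesized to PCI_EXP_LNKCAP2_SLS_5_0GB | PCI_EXP_LNKCAP2_SLS_2_5GB: both lower speed bits set, reserved LSB clear, so callers can treat old and new devices uniformly.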
/**
* pcie_get_speed_cap - query for the PCI device's link speed capability
* @dev: PCI device to query
*
* Query the PCI device speed capability. Return the maximum link speed
* supported by the device.
* Query the PCI device speed capability.
*
* Return: the maximum link speed supported by the device.
*/
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev)
{
u32 lnkcap2, lnkcap;
/*
* Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18. The
* implementation note there recommends using the Supported Link
* Speeds Vector in Link Capabilities 2 when supported.
*
* Without Link Capabilities 2, i.e., prior to PCIe r3.0, software
* should use the Supported Link Speeds field in Link Capabilities,
* where only 2.5 GT/s and 5.0 GT/s speeds were defined.
*/
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2);
/* PCIe r3.0-compliant */
if (lnkcap2)
return PCIE_LNKCAP2_SLS2SPEED(lnkcap2);
pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap);
if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB)
return PCIE_SPEED_5_0GT;
else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB)
return PCIE_SPEED_2_5GT;
return PCI_SPEED_UNKNOWN;
return PCIE_LNKCAP2_SLS2SPEED(dev->supported_speeds);
}
EXPORT_SYMBOL(pcie_get_speed_cap);
@@ -6653,8 +6686,7 @@ static void pci_request_resource_alignment(struct pci_dev *dev, int bar,
} else {
r->flags &= ~IORESOURCE_SIZEALIGN;
r->flags |= IORESOURCE_STARTALIGN;
r->start = align;
r->end = r->start + size - 1;
resource_set_range(r, align, size);
}
r->flags |= IORESOURCE_UNSET;
}
@@ -6900,6 +6932,8 @@ static int __init pci_setup(char *str)
pci_no_domains();
} else if (!strncmp(str, "noari", 5)) {
pcie_ari_disabled = true;
} else if (!strncmp(str, "notph", 5)) {
pci_no_tph();
} else if (!strncmp(str, "cbiosize=", 9)) {
pci_cardbus_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "cbmemsize=", 10)) {


@@ -104,6 +104,7 @@ bool pci_reset_supported(struct pci_dev *dev);
void pci_init_reset_methods(struct pci_dev *dev);
int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
int pci_bus_error_reset(struct pci_dev *dev);
int __pci_reset_bus(struct pci_bus *bus);
struct pci_cap_saved_data {
u16 cap_nr;
@@ -323,6 +324,9 @@ void __pci_bus_assign_resources(const struct pci_bus *bus,
struct list_head *realloc_head,
struct list_head *fail_head);
bool pci_bus_clip_resource(struct pci_dev *dev, int idx);
void pci_walk_bus_locked(struct pci_bus *top,
int (*cb)(struct pci_dev *, void *),
void *userdata);
const char *pci_resource_name(struct pci_dev *dev, unsigned int i);
@@ -331,6 +335,17 @@ void pci_disable_bridge_window(struct pci_dev *dev);
struct pci_bus *pci_bus_get(struct pci_bus *bus);
void pci_bus_put(struct pci_bus *bus);
#define PCIE_LNKCAP_SLS2SPEED(lnkcap) \
({ \
((lnkcap) == PCI_EXP_LNKCAP_SLS_64_0GB ? PCIE_SPEED_64_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_32_0GB ? PCIE_SPEED_32_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_16_0GB ? PCIE_SPEED_16_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_8_0GB ? PCIE_SPEED_8_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_5_0GB ? PCIE_SPEED_5_0GT : \
(lnkcap) == PCI_EXP_LNKCAP_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN); \
})
/* PCIe link information from Link Capabilities 2 */
#define PCIE_LNKCAP2_SLS2SPEED(lnkcap2) \
((lnkcap2) & PCI_EXP_LNKCAP2_SLS_64_0GB ? PCIE_SPEED_64_0GT : \
@@ -341,6 +356,15 @@ void pci_bus_put(struct pci_bus *bus);
(lnkcap2) & PCI_EXP_LNKCAP2_SLS_2_5GB ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN)
#define PCIE_LNKCTL2_TLS2SPEED(lnkctl2) \
((lnkctl2) == PCI_EXP_LNKCTL2_TLS_64_0GT ? PCIE_SPEED_64_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_32_0GT ? PCIE_SPEED_32_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_16_0GT ? PCIE_SPEED_16_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_8_0GT ? PCIE_SPEED_8_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_5_0GT ? PCIE_SPEED_5_0GT : \
(lnkctl2) == PCI_EXP_LNKCTL2_TLS_2_5GT ? PCIE_SPEED_2_5GT : \
PCI_SPEED_UNKNOWN)
/* PCIe speed to Mb/s reduced by encoding overhead */
#define PCIE_SPEED2MBS_ENC(speed) \
((speed) == PCIE_SPEED_64_0GT ? 64000*1/1 : \
@@ -373,12 +397,16 @@ static inline int pcie_dev_speed_mbps(enum pci_bus_speed speed)
return -EINVAL;
}
u8 pcie_get_supported_speeds(struct pci_dev *dev);
const char *pci_speed_string(enum pci_bus_speed speed);
enum pci_bus_speed pcie_get_speed_cap(struct pci_dev *dev);
enum pcie_link_width pcie_get_width_cap(struct pci_dev *dev);
void __pcie_print_link_status(struct pci_dev *dev, bool verbose);
void pcie_report_downtraining(struct pci_dev *dev);
void pcie_update_link_speed(struct pci_bus *bus, u16 link_status);
static inline void __pcie_update_link_speed(struct pci_bus *bus, u16 linksta)
{
bus->cur_bus_speed = pcie_link_speed[linksta & PCI_EXP_LNKSTA_CLS];
}
void pcie_update_link_speed(struct pci_bus *bus);
/* Single Root I/O Virtualization */
struct pci_sriov {
@@ -469,10 +497,18 @@ static inline int pci_dev_set_disconnected(struct pci_dev *dev, void *unused)
#define PCI_DEV_ADDED 0
#define PCI_DPC_RECOVERED 1
#define PCI_DPC_RECOVERING 2
#define PCI_DEV_REMOVED 3
static inline void pci_dev_assign_added(struct pci_dev *dev, bool added)
static inline void pci_dev_assign_added(struct pci_dev *dev)
{
assign_bit(PCI_DEV_ADDED, &dev->priv_flags, added);
smp_mb__before_atomic();
set_bit(PCI_DEV_ADDED, &dev->priv_flags);
smp_mb__after_atomic();
}
static inline bool pci_dev_test_and_clear_added(struct pci_dev *dev)
{
return test_and_clear_bit(PCI_DEV_ADDED, &dev->priv_flags);
}
static inline bool pci_dev_is_added(const struct pci_dev *dev)
@@ -480,6 +516,11 @@ static inline bool pci_dev_is_added(const struct pci_dev *dev)
return test_bit(PCI_DEV_ADDED, &dev->priv_flags);
}
static inline bool pci_dev_test_and_set_removed(struct pci_dev *dev)
{
return test_and_set_bit(PCI_DEV_REMOVED, &dev->priv_flags);
}
#ifdef CONFIG_PCIEAER
#include <linux/aer.h>
@@ -597,6 +638,18 @@ static inline int pci_iov_bus_range(struct pci_bus *bus)
#endif /* CONFIG_PCI_IOV */
#ifdef CONFIG_PCIE_TPH
void pci_restore_tph_state(struct pci_dev *dev);
void pci_save_tph_state(struct pci_dev *dev);
void pci_no_tph(void);
void pci_tph_init(struct pci_dev *dev);
#else
static inline void pci_restore_tph_state(struct pci_dev *dev) { }
static inline void pci_save_tph_state(struct pci_dev *dev) { }
static inline void pci_no_tph(void) { }
static inline void pci_tph_init(struct pci_dev *dev) { }
#endif
#ifdef CONFIG_PCIE_PTM
void pci_ptm_init(struct pci_dev *dev);
void pci_save_ptm_state(struct pci_dev *dev);
@@ -692,6 +745,17 @@ static inline void pcie_set_ecrc_checking(struct pci_dev *dev) { }
static inline void pcie_ecrc_get_policy(char *str) { }
#endif
#ifdef CONFIG_PCIEPORTBUS
void pcie_reset_lbms_count(struct pci_dev *port);
int pcie_lbms_count(struct pci_dev *port, unsigned long *val);
#else
static inline void pcie_reset_lbms_count(struct pci_dev *port) {}
static inline int pcie_lbms_count(struct pci_dev *port, unsigned long *val)
{
return -EOPNOTSUPP;
}
#endif
struct pci_dev_reset_methods {
u16 vendor;
u16 device;
@@ -746,6 +810,7 @@ void pci_set_bus_of_node(struct pci_bus *bus);
void pci_release_bus_of_node(struct pci_bus *bus);
int devm_of_pci_bridge_init(struct device *dev, struct pci_host_bridge *bridge);
bool of_pci_supply_present(struct device_node *np);
#else
static inline int
@@ -793,6 +858,10 @@ static inline int devm_of_pci_bridge_init(struct device *dev, struct pci_host_br
return 0;
}
static inline bool of_pci_supply_present(struct device_node *np)
{
return false;
}
#endif /* CONFIG_OF */
struct of_changeset;


@@ -4,7 +4,7 @@
pcieportdrv-y := portdrv.o rcec.o
obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o
obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o bwctrl.o
obj-y += aspm.o
obj-$(CONFIG_PCIEAER) += aer.o err.o


@@ -180,7 +180,8 @@ static int disable_ecrc_checking(struct pci_dev *dev)
}
/**
* pcie_set_ecrc_checking - set/unset PCIe ECRC checking for a device based on global policy
* pcie_set_ecrc_checking - set/unset PCIe ECRC checking for a device based
* on global policy
* @dev: the PCI device
*/
void pcie_set_ecrc_checking(struct pci_dev *dev)
@@ -1148,14 +1149,16 @@ static void aer_recover_work_func(struct work_struct *work)
continue;
}
pci_print_aer(pdev, entry.severity, entry.regs);
/*
* Memory for aer_capability_regs(entry.regs) is being allocated from the
* ghes_estatus_pool to protect it from overwriting when multiple sections
* are present in the error status. Thus free the same after processing
* the data.
* Memory for aer_capability_regs(entry.regs) is being
* allocated from the ghes_estatus_pool to protect it from
* overwriting when multiple sections are present in the
* error status. Thus free the same after processing the
* data.
*/
ghes_estatus_pool_region_free((unsigned long)entry.regs,
sizeof(struct aer_capability_regs));
sizeof(struct aer_capability_regs));
if (entry.severity == AER_NONFATAL)
pcie_do_recovery(pdev, pci_channel_io_normal,

Some files were not shown because too many files have changed in this diff.