Char/Misc driver patches for 4.16-rc1

Here is the big pull request for char/misc drivers for 4.16-rc1.
 
 There's a lot of stuff in here.  Three new driver subsystems were added
 for various types of hardware busses:
 	- siox
 	- slimbus
 	- soundwire
 as well as a new vboxguest subsystem for the VirtualBox hypervisor
 drivers.
 
 There's also big updates from the FPGA subsystem, lots of Android binder
 fixes, the usual handful of hyper-v updates, and lots of other smaller
 driver updates.
 
 All of these have been in linux-next for a long time, with no reported
 issues.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCWnLuZw8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ynS4QCcCrPmwfD5PJwaF+q2dPfyKaflkQMAn0x6Wd+u
 Gw3Z2scgjETUpwJ9ilnL
 =xcQ0
 -----END PGP SIGNATURE-----

Merge tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big pull request for char/misc drivers for 4.16-rc1.

  There's a lot of stuff in here. Three new driver subsystems were added
  for various types of hardware busses:

   - siox
   - slimbus
   - soundwire

  as well as a new vboxguest subsystem for the VirtualBox hypervisor
  drivers.

  There's also big updates from the FPGA subsystem, lots of Android
  binder fixes, the usual handful of hyper-v updates, and lots of other
  smaller driver updates.

  All of these have been in linux-next for a long time, with no reported
  issues"

* tag 'char-misc-4.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (155 commits)
  char: lp: use true or false for boolean values
  android: binder: use VM_ALLOC to get vm area
  android: binder: Use true and false for boolean values
  lkdtm: fix handle_irq_event symbol for INT_HW_IRQ_EN
  EISA: Delete error message for a failed memory allocation in eisa_probe()
  EISA: Whitespace cleanup
  misc: remove AVR32 dependencies
  virt: vbox: Add error mapping for VERR_INVALID_NAME and VERR_NO_MORE_FILES
  soundwire: Fix a signedness bug
  uio_hv_generic: fix new type mismatch warnings
  uio_hv_generic: fix type mismatch warnings
  auxdisplay: img-ascii-lcd: add missing MODULE_DESCRIPTION/AUTHOR/LICENSE
  uio_hv_generic: add rescind support
  uio_hv_generic: check that host supports monitor page
  uio_hv_generic: create send and receive buffers
  uio: document uio_hv_generic regions
  doc: fix documentation about uio_hv_generic
  vmbus: add monitor_id and subchannel_id to sysfs per channel
  vmbus: fix ABI documentation
  uio_hv_generic: use ISR callback method
  ...
This commit is contained in:
Linus Torvalds 2018-02-01 10:31:17 -08:00
commit f6cff79f1d
150 changed files with 14300 additions and 1154 deletions


@ -42,72 +42,93 @@ Contact: K. Y. Srinivasan <kys@microsoft.com>
Description: The 16 bit vendor ID of the device
Users: tools/hv/lsvmbus and user level RDMA libraries
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Directory for per-channel information
NN is the VMBUS relid associated with the channel.
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: VCPU (sub)channel is affinitized to
Users: tools/hv/lsvmbus and other debuggig tools
Users: tools/hv/lsvmbus and other debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/cpu
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: VCPU (sub)channel is affinitized to
Users: tools/hv/lsvmbus and other debuggig tools
Users: tools/hv/lsvmbus and other debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/in_mask
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/in_mask
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Inbound channel signaling state
Description: Host to guest channel interrupt mask
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/latency
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/latency
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Channel signaling latency
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/out_mask
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/out_mask
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Outbound channel signaling state
Description: Guest to host channel interrupt mask
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/pending
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/pending
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Channel interrupt pending state
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/read_avail
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/read_avail
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Bytes availabble to read
Description: Bytes available to read
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/write_avail
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/write_avail
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Bytes availabble to write
Description: Bytes available to write
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/events
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/events
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Number of times we have signaled the host
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/relid/interrupts
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/interrupts
Date: September. 2017
KernelVersion: 4.14
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Number of times we have taken an interrupt (incoming)
Users: Debugging tools
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/subchannel_id
Date: January. 2018
KernelVersion: 4.16
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Subchannel ID associated with VMBUS channel
Users: Debugging tools and userspace drivers
What: /sys/bus/vmbus/devices/vmbus_*/channels/NN/monitor_id
Date: January. 2018
KernelVersion: 4.16
Contact: Stephen Hemminger <sthemmin@microsoft.com>
Description: Monitor bit associated with channel
Users: Debugging tools and userspace drivers
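As a hedged illustration of how a debugging tool (such as the lsvmbus users named above) might consume these per-channel attributes, the sketch below reads one attribute file; the GUID and relid values are placeholders, not values from this patch.

#include <stdio.h>

static int read_channel_attr(const char *guid, int relid, const char *attr,
			     char *buf, int len)
{
	char path[256];
	FILE *f;

	/* e.g. /sys/bus/vmbus/devices/<guid>/channels/<relid>/read_avail */
	snprintf(path, sizeof(path),
		 "/sys/bus/vmbus/devices/%s/channels/%d/%s", guid, relid, attr);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

Calling read_channel_attr(guid, 14, "read_avail", buf, sizeof(buf)), for example, would return the "Bytes available to read" value documented above.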


@ -0,0 +1,87 @@
What: /sys/bus/siox/devices/siox-X/active
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
On reading represents the current state of the bus. If it
contains a "0" the bus is stopped and connected devices are
expected to not do anything because their watchdog triggered.
When the file contains a "1" the bus is operated and periodically
does a push-pull cycle to write and read data from the
connected devices.
When writing a "0" or "1" the bus moves to the described state.
What: /sys/bus/siox/devices/siox-X/device_add
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Write-only file. Write
<type> <inbytes> <outbytes> <statustype>
to add a new device dynamically. <type> is the name that is used to match
to a driver (similar to the platform bus). <inbytes> and <outbytes> define
the length of the input and output shift registers in bytes, respectively.
<statustype> defines the 4 bit device type that is checked to identify connection
problems.
The new device is added to the end of the existing chain.
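A hedged sketch of using this attribute from a small C tool follows; the sysfs path, type string and register sizes are made-up examples, not values from this patch.

#include <stdio.h>

int main(void)
{
	/* Format: <type> <inbytes> <outbytes> <statustype>; all values illustrative. */
	FILE *f = fopen("/sys/bus/siox/devices/siox-0/device_add", "w");

	if (!f)
		return 1;
	fprintf(f, "eckelmann-dev 4 4 3\n");
	fclose(f);
	return 0;
}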
What: /sys/bus/siox/devices/siox-X/device_remove
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Write-only file. A single write removes the last device in the siox chain.
What: /sys/bus/siox/devices/siox-X/poll_interval_ns
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Defines the interval between two poll cycles in nanoseconds.
Note this is rounded to jiffies on writing. On reading the current value
is returned.
What: /sys/bus/siox/devices/siox-X-Y/connected
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value. "0" means the Yth device on siox bus X isn't "connected" i.e.
communication with it is not ensured. "1" signals a working connection.
What: /sys/bus/siox/devices/siox-X-Y/inbytes
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value reporting the inbytes value provided to siox-X/device_add.
What: /sys/bus/siox/devices/siox-X-Y/status_errors
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Counts the number of time intervals when the read status byte doesn't yield the
expected value.
What: /sys/bus/siox/devices/siox-X-Y/type
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value reporting the type value provided to siox-X/device_add.
What: /sys/bus/siox/devices/siox-X-Y/watchdog
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value reporting if the watchdog of the siox device is
active. "0" means the watchdog is not active and the device is expected to
be operational. "1" means the watchdog keeps the device in reset.
What: /sys/bus/siox/devices/siox-X-Y/watchdog_errors
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value reporting the number of time intervals during which the
watchdog was active.
What: /sys/bus/siox/devices/siox-X-Y/outbytes
KernelVersion: 4.16
Contact: Gavin Schenk <g.schenk@eckelmann.de>, Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Description:
Read-only value reporting the outbytes value provided to siox-X/device_add.


@ -11,7 +11,9 @@ Required properties:
- spi-max-frequency : max spi frequency to use
- pagesize : size of the eeprom page
- size : total eeprom size in bytes
- address-width : number of address bits (one of 8, 16, or 24)
- address-width : number of address bits (one of 8, 9, 16, or 24).
For 9 bits, the MSB of the address is sent as bit 3 of the instruction
byte, before the address byte.
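The 9-bit case can be illustrated with a small, hedged C sketch; the READ opcode value and helper name below are assumptions for illustration, not taken from the at25 driver.

#include <stdint.h>

#define EE_READ 0x03	/* common SPI EEPROM READ opcode; an assumption here */

/*
 * For address-width = <9>, the 9th (most significant) address bit travels
 * as bit 3 of the instruction byte, followed by the low 8 address bits.
 */
static void ee_build_read_cmd(uint16_t addr, uint8_t cmd[2])
{
	cmd[0] = EE_READ | (uint8_t)(((addr >> 8) & 0x1) << 3);
	cmd[1] = addr & 0xff;
}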
Optional properties:
- spi-cpha : SPI shifted clock phase, as per spi-bus bindings.


@ -6,12 +6,17 @@ Required properties:
- "rockchip,rk3188-efuse" - for RK3188 SoCs.
- "rockchip,rk3228-efuse" - for RK3228 SoCs.
- "rockchip,rk3288-efuse" - for RK3288 SoCs.
- "rockchip,rk3328-efuse" - for RK3328 SoCs.
- "rockchip,rk3368-efuse" - for RK3368 SoCs.
- "rockchip,rk3399-efuse" - for RK3399 SoCs.
- reg: Should contain the registers location and exact eFuse size
- clocks: Should be the clock id of eFuse
- clock-names: Should be "pclk_efuse"
Optional properties:
- rockchip,efuse-size: Should be the exact eFuse size in bytes; the eFuse
size in property <reg> is ignored when this property is defined.
Deprecated properties:
- compatible: "rockchip,rockchip-efuse"
Old eFuse compatible value, compatible with rk3066a, rk3188 and rk3288


@ -0,0 +1,19 @@
Eckelmann SIOX GPIO bus
Required properties:
- compatible : "eckelmann,siox-gpio"
- din-gpios, dout-gpios, dclk-gpios, dld-gpios: reference the GPIOs used for the
corresponding bus signals.
Examples:
siox {
compatible = "eckelmann,siox-gpio";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_siox>;
din-gpios = <&gpio6 11 0>;
dout-gpios = <&gpio6 8 0>;
dclk-gpios = <&gpio6 9 0>;
dld-gpios = <&gpio6 10 0>;
};


@ -0,0 +1,50 @@
SLIM(Serial Low Power Interchip Media Bus) bus
SLIMbus is a 2-wire bus, and is used to communicate with peripheral
components like audio-codec.
Required property for SLIMbus controller node:
- compatible - name of SLIMbus controller
Child nodes:
Every SLIMbus controller node can contain zero or more child nodes
representing slave devices on the bus. Every SLIMbus slave device is
uniquely determined by the enumeration address containing 4 fields:
Manufacturer ID, Product code, Device index, and Instance value for
the device.
If a child node is not present, the slave device is instantiated after device
discovery (the slave device reporting itself present).
In some cases it may be necessary to describe non-probeable device
details such as non-standard ways of powering up a device. In
such cases, child nodes for those devices will be present as
slaves of the SLIMbus controller, as detailed below.
Required property for SLIMbus child node if it is present:
- reg - Should be ('Device index', 'Instance ID') from SLIMbus
Enumeration Address.
Device Index Uniquely identifies multiple Devices within
a single Component.
Instance ID Is for the cases where multiple Devices of the
same type or Class are attached to the bus.
- compatible -"slimMID,PID". The textual representation of Manufacturer ID,
Product Code, shall be in lower case hexadecimal with leading
zeroes suppressed
SLIMbus example for Qualcomm's slimbus manager component:
slim@28080000 {
compatible = "qcom,apq8064-slim", "qcom,slim";
reg = <0x28080000 0x2000>;
interrupts = <0 33 0>;
clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
clock-names = "iface", "core";
#address-cells = <2>;
#size-cells = <0>;
codec: wcd9310@1,0 {
compatible = "slim217,60";
reg = <1 0>;
};
};


@ -0,0 +1,39 @@
Qualcomm SLIMbus controller
This controller is used if applications processor driver controls SLIMbus
master component.
Required properties:
- #address-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
- #size-cells - refer to Documentation/devicetree/bindings/slimbus/bus.txt
- reg : Offset and length of the register region(s) for the device
- reg-names : Register region name(s) referenced in reg above
Required register resource entries are:
"ctrl": Physical address of controller register blocks
"slew": required for "qcom,apq8064-slim" SOC.
- compatible : should be "qcom,<SOC-NAME>-slim" for SOC specific compatible
followed by "qcom,slim" for fallback.
- interrupts : Interrupt number used by this controller
- clocks : Interface and core clocks used by this SLIMbus controller
- clock-names : Required clock-name entries are:
"iface" : Interface clock for this controller
"core" : Interrupt for controller core's BAM
Example:
slim@28080000 {
compatible = "qcom,apq8064-slim", "qcom,slim";
reg = <0x28080000 0x2000>, <0x80207C 4>;
reg-names = "ctrl", "slew";
interrupts = <0 33 0>;
clocks = <&lcc SLIMBUS_SRC>, <&lcc AUDIO_SLIMBUS_CLK>;
clock-names = "iface", "core";
#address-cells = <2>;
#size-cells = <0>;
wcd9310: audio-codec@1,0 {
compatible = "slim217,60";
reg = <1 0>;
};
};


@ -97,6 +97,7 @@ dptechnics DPTechnics
dragino Dragino Technology Co., Limited
ea Embedded Artists AB
ebv EBV Elektronik
eckelmann Eckelmann AG
edt Emerging Display Technologies
eeti eGalax_eMPIA Technology Inc
elan Elan Microelectronic Corp.


@ -47,6 +47,8 @@ available subsections can be seen below.
gpio
misc_devices
dmaengine/index
slimbus
soundwire/index
.. only:: subproject and html


@ -0,0 +1,127 @@
============================
Linux kernel SLIMbus support
============================
Overview
========
What is SLIMbus?
----------------
SLIMbus (Serial Low Power Interchip Media Bus) is a specification developed by
MIPI (Mobile Industry Processor Interface) alliance. The bus uses master/slave
configuration, and is a 2-wire multi-drop implementation (clock, and data).
Currently, SLIMbus is used to interface between application processors of SoCs
(System-on-Chip) and peripheral components (typically codec). SLIMbus uses
Time-Division-Multiplexing to accommodate multiple data channels, and
a control channel.
The control channel is used for various control functions such as bus
management, configuration and status updates. These messages can be unicast (e.g.
reading/writing device specific values), or multicast (e.g. a data channel
reconfiguration sequence is a broadcast message announced to all devices).
A data channel is used for data-transfer between 2 SLIMbus devices. Data
channel uses dedicated ports on the device.
Hardware description:
---------------------
SLIMbus specification has different types of device classifications based on
their capabilities.
A manager device is responsible for enumeration, configuration, and dynamic
channel allocation. Every bus has 1 active manager.
A generic device is a device providing application functionality (e.g. codec).
Framer device is responsible for clocking the bus, and transmitting frame-sync
and framing information on the bus.
Each SLIMbus component has an interface device for monitoring the physical layer.
Typically each SoC contains SLIMbus component having 1 manager, 1 framer device,
1 generic device (for data channel support), and 1 interface device.
External peripheral SLIMbus component usually has 1 generic device (for
functionality/data channel support), and an associated interface device.
The generic device's registers are mapped as 'value elements' so that they can
be written/read using SLIMbus control channel exchanging control/status type of
information.
In case there are multiple framer devices on the same bus, the manager device is
responsible for selecting the active framer for clocking the bus.
Per specification, SLIMbus uses "clock gears" to do power management based on
current frequency and bandwidth requirements. There are 10 clock gears and each
gear changes the SLIMbus frequency to be twice its previous gear.
Each device has a 6-byte enumeration-address and the manager assigns every
device with a 1-byte logical address after the devices report presence on the
bus.
Software description:
---------------------
There are 2 types of SLIMbus drivers:
slim_controller represents a 'controller' for SLIMbus. This driver should
implement duties needed by the SoC (manager device, associated
interface device for monitoring the layers and reporting errors, default
framer device).
slim_device represents the 'generic device/component' for SLIMbus, and a
slim_driver should implement driver for that slim_device.
Device notifications to the driver:
-----------------------------------
Since SLIMbus devices have mechanisms for reporting their presence, the
framework allows drivers to bind when corresponding devices report their
presence on the bus.
However, it is possible that the driver needs to be probed
first so that it can enable the corresponding SLIMbus device (e.g. power it up and/or
take it out of reset). To support that behavior, the framework allows drivers
to probe first as well (e.g. using the standard DeviceTree compatible field).
This creates the necessity for the driver to know when the device is functional
(i.e. reported present). The device_up callback is used for that purpose when the
device reports present and is assigned a logical address by the controller.
Similarly, SLIMbus devices 'report absent' when they go down. A 'device_down'
callback notifies the driver when the device reports absent and its logical
address assignment is invalidated by the controller.
Another notification "boot_device" is used to notify the slim_driver when
controller resets the bus. This notification allows the driver to take necessary
steps to boot the device so that it's functional after the bus has been reset.
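A hedged sketch of a slim_device driver wiring up these notifications follows; only the callback names mentioned above (probe, device_up, device_down) come from this text, and the remaining struct slim_driver details are assumptions for illustration.
.. code-block:: c

static int my_codec_probe(struct slim_device *sdev)
{
	/* Power up / take the device out of reset so it can report present. */
	return 0;
}

static void my_codec_device_up(struct slim_device *sdev)
{
	/* Device reported present and got a logical address; start I/O. */
}

static void my_codec_device_down(struct slim_device *sdev)
{
	/* Logical address was invalidated; stop register traffic. */
}

/* Field names below are assumptions; see include/linux/slimbus.h. */
static struct slim_driver my_codec_driver = {
	.driver = {
		.name = "my-slim-codec",
	},
	.probe = my_codec_probe,
	.device_up = my_codec_device_up,
	.device_down = my_codec_device_down,
};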
Driver and Controller APIs:
--------------------------
.. kernel-doc:: include/linux/slimbus.h
:internal:
.. kernel-doc:: drivers/slimbus/slimbus.h
:internal:
.. kernel-doc:: drivers/slimbus/core.c
:export:
Clock-pause:
------------
SLIMbus mandates that a reconfiguration sequence (known as clock-pause) be
broadcast to all active devices on the bus before the bus can enter low-power
mode. Controller uses this sequence when it decides to enter low-power mode so
that corresponding clocks and/or power-rails can be turned off to save power.
Clock-pause is exited by waking up framer device (if controller driver initiates
exiting low power mode), or by toggling the data line (if a slave device wants
to initiate it).
Clock-pause APIs:
~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/slimbus/sched.c
:export:
Messaging:
----------
The framework supports regmap and read/write APIs to exchange control information
with a SLIMbus device. The APIs can be synchronous or asynchronous.
The header file <linux/slimbus.h> has more documentation about messaging APIs.
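As a hedged illustration of synchronous value-element access from a slim_device driver; the helper names and signatures here (slim_readb()/slim_writeb()) are assumptions based on the messaging API referenced below, and the register address is made up.
.. code-block:: c

static int my_codec_tweak(struct slim_device *sdev)
{
	int val;

	val = slim_readb(sdev, 0x40);	/* synchronous value-element read */
	if (val < 0)
		return val;

	return slim_writeb(sdev, 0x40, val | 0x1);	/* synchronous write */
}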
Messaging APIs:
~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/slimbus/messaging.c
:export:


@ -0,0 +1,15 @@
=======================
SoundWire Documentation
=======================
.. toctree::
:maxdepth: 1
summary
.. only:: subproject
Indices
=======
* :ref:`genindex`


@ -0,0 +1,207 @@
===========================
SoundWire Subsystem Summary
===========================
SoundWire is a new interface ratified in 2015 by the MIPI Alliance.
SoundWire is used for transporting data typically related to audio
functions. SoundWire interface is optimized to integrate audio devices in
mobile or mobile inspired systems.
SoundWire is a 2-pin multi-drop interface with data and clock line. It
facilitates development of low cost, efficient, high performance systems.
Broad level key features of SoundWire interface include:
(1) Transporting all of payload data channels, control information, and setup
commands over a single two-pin interface.
(2) Lower clock frequency, and hence lower power consumption, by use of DDR
(Dual Data Rate) data transmission.
(3) Clock scaling and optional multiple data lanes to give wide flexibility
in data rate to match system requirements.
(4) Device status monitoring, including interrupt-style alerts to the Master.
The SoundWire protocol supports up to eleven Slave interfaces. All the
interfaces share the common Bus containing data and clock line. Each of the
Slaves can support up to 14 Data Ports. 13 Data Ports are dedicated to audio
transport. Data Port0 is dedicated to transport of Bulk control information;
each of the audio Data Ports (1..14) can support up to 8 Channels in
transmit or receive mode (typically a fixed direction, but configurable
direction is enabled by the specification). Bandwidth restrictions to
~19.2..24.576Mbits/s do not, however, allow for 11*13*8 channels to be
transmitted simultaneously.
The figure below shows an example of connectivity between a SoundWire Master and
two Slave devices. ::
+---------------+ +---------------+
| | Clock Signal | |
| Master |-------+-------------------------------| Slave |
| Interface | | Data Signal | Interface 1 |
| |-------|-------+-----------------------| |
+---------------+ | | +---------------+
| |
| |
| |
+--+-------+--+
| |
| Slave |
| Interface 2 |
| |
+-------------+
Terminology
===========
The MIPI SoundWire specification uses the term 'device' to refer to a Master
or Slave interface, which of course can be confusing. In this summary and
code we use the term interface only to refer to the hardware. We follow the
Linux device model by mapping each Slave interface connected on the bus as a
device managed by a specific driver. The Linux SoundWire subsystem provides
a framework to implement a SoundWire Slave driver with an API allowing
3rd-party vendors to enable implementation-defined functionality while
common setup/configuration tasks are handled by the bus.
Bus:
Implements SoundWire Linux Bus which handles the SoundWire protocol.
Programs all the MIPI-defined Slave registers. Represents a SoundWire
Master. Multiple instances of Bus may be present in a system.
Slave:
Registers as SoundWire Slave device (Linux Device). Multiple Slave devices
can register to a Bus instance.
Slave driver:
Driver controlling the Slave device. MIPI-specified registers are controlled
directly by the Bus (and transmitted through the Master driver/interface).
Any implementation-defined Slave register is controlled by Slave driver. In
practice, it is expected that the Slave driver relies on regmap and does not
request direct register access.
Programming interfaces (SoundWire Master interface Driver)
==========================================================
SoundWire Bus supports programming interfaces for the SoundWire Master
implementation and SoundWire Slave devices. All the code uses the "sdw"
prefix commonly used by SoC designers and 3rd party vendors.
Each of the SoundWire Master interfaces needs to be registered to the Bus.
Bus implements API to read standard Master MIPI properties and also provides
a callback in Master ops for the Master driver to implement its own functions that
provide capabilities information. DT support is not implemented at this
time but should be trivial to add since capabilities are enabled with the
``device_property_`` API.
The Master interface along with the Master interface capabilities are
registered based on board file, DT or ACPI.
Following is the Bus API to register the SoundWire Bus:
.. code-block:: c
int sdw_add_bus_master(struct sdw_bus *bus)
{
if (!bus->dev)
return -ENODEV;
mutex_init(&bus->lock);
INIT_LIST_HEAD(&bus->slaves);
/* Check ACPI for Slave devices */
sdw_acpi_find_slaves(bus);
/* Check DT for Slave devices */
sdw_of_find_slaves(bus);
return 0;
}
This will initialize sdw_bus object for Master device. "sdw_master_ops" and
"sdw_master_port_ops" callback functions are provided to the Bus.
"sdw_master_ops" is used by Bus to control the Bus in the hardware specific
way. It includes Bus control functions such as sending the SoundWire
read/write messages on Bus, setting up clock frequency & Stream
Synchronization Point (SSP). The "sdw_master_ops" structure abstracts the
hardware details of the Master from the Bus.
"sdw_master_port_ops" is used by Bus to setup the Port parameters of the
Master interface Port. Master interface Port register map is not defined by
MIPI specification, so Bus calls the "sdw_master_port_ops" callback
function to do Port operations like "Port Prepare", "Port Transport params
set", "Port enable and disable". The implementation of the Master driver can
then perform hardware-specific configurations.
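A hedged sketch of how a Master driver might tie this together is shown below; sdw_add_bus_master() and the bus->dev check come from the code above, while the ops wiring and the structure member names are assumptions for illustration.
.. code-block:: c

/* Hardware-specific callbacks would be filled in here; member names are
 * assumptions for this sketch.
 */
static struct sdw_master_ops my_master_ops = {
	/* .xfer_msg, .set_bus_conf, ... */
};

static struct sdw_master_port_ops my_port_ops = {
	/* .dpn_port_prep, .dpn_port_enable_ch, ... */
};

static int my_master_probe(struct platform_device *pdev)
{
	struct sdw_bus *bus;

	bus = devm_kzalloc(&pdev->dev, sizeof(*bus), GFP_KERNEL);
	if (!bus)
		return -ENOMEM;

	bus->dev = &pdev->dev;		/* checked by sdw_add_bus_master() */
	bus->ops = &my_master_ops;	/* Bus control: messages, clock, SSP */
	bus->port_ops = &my_port_ops;	/* Master Port operations */

	return sdw_add_bus_master(bus);
}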
Programming interfaces (SoundWire Slave Driver)
===============================================
The MIPI specification requires each Slave interface to expose a unique
48-bit identifier, stored in 6 read-only dev_id registers. This dev_id
identifier contains vendor and part information, as well as a field enabling
to differentiate between identical components. An additional class field is
currently unused. A Slave driver is written for a specific vendor and part
identifier; the Bus enumerates the Slave device based on these two IDs.
Slave device and driver matching is done on the same two IDs, and the probe
of the Slave driver is called by the Bus on a successful match between device and
driver id. A parent/child relationship is enforced between Master and Slave
devices (the logical representation is aligned with the physical
connectivity).
The information on Master/Slave dependencies is stored in platform data,
board-file, ACPI or DT. The MIPI Software specification defines additional
link_id parameters for controllers that have multiple Master interfaces. The
dev_id registers are only unique in the scope of a link, and the link_id
unique in the scope of a controller. Both dev_id and link_id are not
necessarily unique at the system level but the parent/child information is
used to avoid ambiguity.
.. code-block:: c
static const struct sdw_device_id slave_id[] = {
SDW_SLAVE_ENTRY(0x025d, 0x700, 0),
{},
};
MODULE_DEVICE_TABLE(sdw, slave_id);
static struct sdw_driver slave_sdw_driver = {
.driver = {
.name = "slave_xxx",
.pm = &slave_runtime_pm,
},
.probe = slave_sdw_probe,
.remove = slave_sdw_remove,
.ops = &slave_slave_ops,
.id_table = slave_id,
};
For capabilities, Bus implements API to read standard Slave MIPI properties
and also provides a callback in Slave ops for the Slave driver to implement its own
function that provides capabilities information. Bus needs to know a set of
Slave capabilities to program Slave registers and to control the Bus
reconfigurations.
Future enhancements to be done
==============================
(1) Bulk Register Access (BRA) transfers.
(2) Multiple data lane support.
Links
=====
SoundWire MIPI specification 1.1 is available at:
https://members.mipi.org/wg/All-Members/document/70290
SoundWire MIPI DisCo (Discovery and Configuration) specification is
available at:
https://www.mipi.org/specifications/mipi-disco-soundwire
(publicly accessible with registration or directly accessible to MIPI
members)
MIPI Alliance Manufacturer ID Page: mid.mipi.org


@ -667,27 +667,28 @@ Making the driver recognize the device
Since the driver does not declare any device GUID's, it will not get
loaded automatically and will not automatically bind to any devices, you
must load it and allocate id to the driver yourself. For example, to use
the network device GUID::
the network device class GUID::
modprobe uio_hv_generic
echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
If there already is a hardware specific kernel driver for the device,
the generic driver still won't bind to it, in this case if you want to
use the generic driver (why would you?) you'll have to manually unbind
the hardware specific driver and bind the generic driver, like this::
use the generic driver for a userspace library, you'll have to manually unbind
the hardware specific driver and bind the generic driver, using the device specific GUID
like this::
echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind
echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo -n ed963694-e847-4b2a-85af-bc9cfc11d6f3 > /sys/bus/vmbus/drivers/uio_hv_generic/bind
You can verify that the device has been bound to the driver by looking
for it in sysfs, for example like the following::
ls -l /sys/bus/vmbus/devices/vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver
ls -l /sys/bus/vmbus/devices/ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver
Which if successful should print::
.../vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic
.../ed963694-e847-4b2a-85af-bc9cfc11d6f3/driver -> ../../../bus/vmbus/drivers/uio_hv_generic
Things to know about uio_hv_generic
-----------------------------------
@ -697,6 +698,17 @@ prevents the device from generating further interrupts until the bit is
cleared. The userspace driver should clear this bit before blocking and
waiting for more interrupts.
When the host rescinds a device, the interrupt file descriptor is marked down
and any reads of the interrupt file descriptor will return -EIO, similar
to a closed socket or disconnected serial device.
The vmbus device regions are mapped into uio device resources:
0) Channel ring buffers: guest to host and host to guest
1) Guest to host interrupt signalling pages
2) Guest to host monitor page
3) Network receive buffer region
4) Network send buffer region
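As a hedged sketch of a userspace driver consuming these regions and the interrupt semantics above (the device node, mapping size and region index are placeholders; selecting region N by mmap()ing at offset N * page size is standard UIO behaviour)::

	#include <errno.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pgsz = sysconf(_SC_PAGESIZE);
		int fd = open("/dev/uio0", O_RDWR);
		uint32_t count;
		void *ring;

		if (fd < 0)
			return 1;

		/* Map region 0 (the channel ring buffers); size is a placeholder. */
		ring = mmap(NULL, 16 * pgsz, PROT_READ | PROT_WRITE, MAP_SHARED,
			    fd, 0 /* region 0 => offset 0 * page size */);
		if (ring == MAP_FAILED)
			return 1;

		for (;;) {
			/* Block until the host signals the channel. */
			if (read(fd, &count, sizeof(count)) != sizeof(count)) {
				if (errno == EIO)	/* device rescinded by the host */
					break;
				continue;
			}
			/* ... process the ring buffer at 'ring' ... */
		}
		return 0;
	}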
Further information
===================


@ -11,61 +11,65 @@ hidden away in a low level driver which registers a set of ops with the core.
The FPGA image data itself is very manufacturer specific, but for our purposes
it's just binary data. The FPGA manager core won't parse it.
The FPGA image to be programmed can be in a scatter gather list, a single
contiguous buffer, or a firmware file. Because allocating contiguous kernel
memory for the buffer should be avoided, users are encouraged to use a scatter
gather list instead if possible.
The particulars for programming the image are presented in a structure (struct
fpga_image_info). This struct contains parameters such as pointers to the
FPGA image as well as image-specific particulars such as whether the image was
built for full or partial reconfiguration.
API Functions:
==============
To program the FPGA from a file or from a buffer:
-------------------------------------------------
To program the FPGA:
--------------------
int fpga_mgr_buf_load(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *buf, size_t count);
int fpga_mgr_load(struct fpga_manager *mgr,
struct fpga_image_info *info);
Load the FPGA from an image which exists as a contiguous buffer in
memory. Allocating contiguous kernel memory for the buffer should be avoided,
users are encouraged to use the _sg interface instead of this.
int fpga_mgr_buf_load_sg(struct fpga_manager *mgr,
struct fpga_image_info *info,
struct sg_table *sgt);
Load the FPGA from an image from non-contiguous in memory. Callers can
construct a sg_table using alloc_page backed memory.
int fpga_mgr_firmware_load(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *image_name);
Load the FPGA from an image which exists as a file. The image file must be on
the firmware search path (see the firmware class documentation). If successful,
Load the FPGA from an image which is indicated in the info. If successful,
the FPGA ends up in operating mode. Return 0 on success or a negative error
code.
A FPGA design contained in a FPGA image file will likely have particulars that
affect how the image is programmed to the FPGA. These are contained in struct
fpga_image_info. Currently the only such particular is a single flag bit
indicating whether the image is for full or partial reconfiguration.
To allocate or free a struct fpga_image_info:
---------------------------------------------
struct fpga_image_info *fpga_image_info_alloc(struct device *dev);
void fpga_image_info_free(struct fpga_image_info *info);
To get/put a reference to a FPGA manager:
-----------------------------------------
struct fpga_manager *of_fpga_mgr_get(struct device_node *node);
struct fpga_manager *fpga_mgr_get(struct device *dev);
Given a DT node or device, get an exclusive reference to a FPGA manager.
void fpga_mgr_put(struct fpga_manager *mgr);
Release the reference.
Given a DT node or device, get a reference to a FPGA manager. This pointer
can be saved until you are ready to program the FPGA. fpga_mgr_put releases
the reference.
To get exclusive control of a FPGA manager:
-------------------------------------------
int fpga_mgr_lock(struct fpga_manager *mgr);
void fpga_mgr_unlock(struct fpga_manager *mgr);
The user should call fpga_mgr_lock and verify that it returns 0 before
attempting to program the FPGA. Likewise, the user should call
fpga_mgr_unlock when done programming the FPGA.
To register or unregister the low level FPGA-specific driver:
-------------------------------------------------------------
int fpga_mgr_register(struct device *dev, const char *name,
const struct fpga_manager_ops *mops,
void *priv);
const struct fpga_manager_ops *mops,
void *priv);
void fpga_mgr_unregister(struct device *dev);
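A hedged sketch of a low level driver registering with the core follows; only fpga_mgr_register()/fpga_mgr_unregister() above come from this document, while the ops member names and the my_mgr_* callbacks are assumptions presumed implemented elsewhere in the driver.

static const struct fpga_manager_ops my_mgr_ops = {
	.state		= my_mgr_state,		/* report current FPGA state */
	.write_init	= my_mgr_write_init,	/* prepare the FPGA for programming */
	.write		= my_mgr_write,		/* push image data to the FPGA */
	.write_complete	= my_mgr_write_complete, /* switch FPGA to operating mode */
};

static int my_mgr_probe(struct platform_device *pdev)
{
	struct my_mgr_priv *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	return fpga_mgr_register(&pdev->dev, "My FPGA Manager", &my_mgr_ops, priv);
}

static int my_mgr_remove(struct platform_device *pdev)
{
	fpga_mgr_unregister(&pdev->dev);
	return 0;
}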
@ -75,62 +79,58 @@ device."
How to write an image buffer to a supported FPGA
================================================
/* Include to get the API */
#include <linux/fpga/fpga-mgr.h>
/* device node that specifies the FPGA manager to use */
struct device_node *mgr_node = ...
/* FPGA image is in this buffer. count is size of the buffer. */
char *buf = ...
int count = ...
/* struct with information about the FPGA image to program. */
struct fpga_image_info info;
/* flags indicates whether to do full or partial reconfiguration */
info.flags = 0;
struct fpga_manager *mgr;
struct fpga_image_info *info;
int ret;
/*
* Get a reference to FPGA manager. The manager is not locked, so you can
* hold onto this reference without it preventing programming.
*
* This example uses the device node of the manager. Alternatively, use
* fpga_mgr_get(dev) instead if you have the device.
*/
mgr = of_fpga_mgr_get(mgr_node);
/* struct with information about the FPGA image to program. */
info = fpga_image_info_alloc(dev);
/* flags indicates whether to do full or partial reconfiguration */
info->flags = FPGA_MGR_PARTIAL_RECONFIG;
/*
* At this point, indicate where the image is. This is pseudo-code; you're
* going to use one of these three.
*/
if (image is in a scatter gather table) {
info->sgt = [your scatter gather table]
} else if (image is in a buffer) {
info->buf = [your image buffer]
info->count = [image buffer size]
} else if (image is in a firmware file) {
info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL);
}
/* Get exclusive control of FPGA manager */
struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node);
ret = fpga_mgr_lock(mgr);
/* Load the buffer to the FPGA */
ret = fpga_mgr_buf_load(mgr, &info, buf, count);
/* Release the FPGA manager */
fpga_mgr_unlock(mgr);
fpga_mgr_put(mgr);
How to write an image file to a supported FPGA
==============================================
/* Include to get the API */
#include <linux/fpga/fpga-mgr.h>
/* device node that specifies the FPGA manager to use */
struct device_node *mgr_node = ...
/* FPGA image is in this file which is in the firmware search path */
const char *path = "fpga-image-9.rbf"
/* struct with information about the FPGA image to program. */
struct fpga_image_info info;
/* flags indicates whether to do full or partial reconfiguration */
info.flags = 0;
int ret;
/* Get exclusive control of FPGA manager */
struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node);
/* Get the firmware image (path) and load it to the FPGA */
ret = fpga_mgr_firmware_load(mgr, &info, path);
/* Release the FPGA manager */
fpga_mgr_put(mgr);
/* Deallocate the image info if you're done with it */
fpga_image_info_free(info);
How to support a new FPGA device
================================


@ -0,0 +1,95 @@
FPGA Regions
Alan Tull 2017
CONTENTS
- Introduction
- The FPGA region API
- Usage example
Introduction
============
This document is meant to be a brief overview of the FPGA region API usage. A
more conceptual look at regions can be found in [1].
For the purposes of this API document, let's just say that a region associates
an FPGA Manager and a bridge (or bridges) with a reprogrammable region of an
FPGA or the whole FPGA. The API provides a way to register a region and to
program a region.
Currently the only layer above fpga-region.c in the kernel is the Device Tree
support (of-fpga-region.c) described in [1]. The DT support layer uses regions
to program the FPGA and then DT to handle enumeration. The common region code
is intended to be used by other schemes that have other ways of accomplishing
enumeration after programming.
An fpga-region can be set up to know the following things:
* which FPGA manager to use to do the programming
* which bridges to disable before programming and enable afterwards.
Additional info needed to program the FPGA image is passed in the struct
fpga_image_info [2] including:
* pointers to the image as either a scatter-gather buffer, a contiguous
buffer, or the name of a firmware file
* flags indicating specifics such as whether the image is for partial
reconfiguration.
===================
The FPGA region API
===================
To register or unregister a region:
-----------------------------------
int fpga_region_register(struct device *dev,
struct fpga_region *region);
int fpga_region_unregister(struct fpga_region *region);
An example of usage can be seen in the probe function of [3].
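For illustration only, a hedged sketch of such a probe function; the struct fpga_region members used here (mgr, get_bridges) and the my_* helpers are assumptions.

static int my_region_probe(struct platform_device *pdev)
{
	struct fpga_region *region;
	int ret;

	region = devm_kzalloc(&pdev->dev, sizeof(*region), GFP_KERNEL);
	if (!region)
		return -ENOMEM;

	/* Manager reference obtained e.g. with of_fpga_mgr_get(), see [2]. */
	region->mgr = my_get_manager(pdev);
	if (IS_ERR(region->mgr))
		return PTR_ERR(region->mgr);
	region->get_bridges = my_get_bridges;	/* optional bridge enumeration */

	platform_set_drvdata(pdev, region);

	ret = fpga_region_register(&pdev->dev, region);
	if (ret)
		fpga_mgr_put(region->mgr);
	return ret;
}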
To program an FPGA:
-------------------
int fpga_region_program_fpga(struct fpga_region *region);
This function operates on info passed in the fpga_image_info
(region->info).
This function will attempt to:
* lock the region's mutex
* lock the region's FPGA manager
* build a list of FPGA bridges if a method has been specified to do so
* disable the bridges
* program the FPGA
* re-enable the bridges
* release the locks
=============
Usage example
=============
First, allocate the info struct:
info = fpga_image_info_alloc(dev);
if (!info)
return -ENOMEM;
Set flags as needed, e.g.
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
Point to your FPGA image, such as:
info->sgt = &sgt;
Add info to region and do the programming:
region->info = info;
ret = fpga_region_program_fpga(region);
Then enumerate whatever hardware has appeared in the FPGA.
--
[1] ../devicetree/bindings/fpga/fpga-region.txt
[2] ./fpga-mgr.txt
[3] ../../drivers/fpga/of-fpga-region.c


@ -0,0 +1,23 @@
Linux kernel FPGA support
Alan Tull 2017
The main point of this project has been to separate out the upper layers
that know when to reprogram an FPGA from the lower layers that know how to
reprogram a specific FPGA device. The intention is to make this manufacturer
agnostic, understanding that of course the FPGA images are very device specific
themselves.
The framework in the kernel includes:
* low level FPGA manager drivers that know how to program a specific device
* the fpga-mgr framework they are registered with
* low level FPGA bridge drivers for hard/soft bridges which are intended to
be disabled during FPGA programming
* the fpga-bridge framework they are registered with
* the fpga-region framework which associates and controls managers and bridges
as reconfigurable regions
* the of-fpga-region support for reprogramming FPGAs when device tree overlays
are applied.
I would encourage you, the user, to add code that creates FPGA regions rather
than trying to control managers and bridges separately.


@ -3421,8 +3421,8 @@ M: Arnd Bergmann <arnd@arndb.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
S: Supported
F: drivers/char/*
F: drivers/misc/*
F: drivers/char/
F: drivers/misc/
F: include/linux/miscdevice.h
CHECKPATCH
@ -12526,6 +12526,13 @@ F: lib/siphash.c
F: lib/test_siphash.c
F: include/linux/siphash.h
SIOX
M: Gavin Schenk <g.schenk@eckelmann.de>
M: Uwe Kleine-König <kernel@pengutronix.de>
S: Supported
F: drivers/siox/*
F: include/trace/events/siox.h
SIS 190 ETHERNET DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>
L: netdev@vger.kernel.org
@ -12577,6 +12584,14 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
F: include/linux/srcu.h
F: kernel/rcu/srcu.c
SERIAL LOW-POWER INTER-CHIP MEDIA BUS (SLIMbus)
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Maintained
F: drivers/slimbus/
F: Documentation/devicetree/bindings/slimbus/
F: include/linux/slimbus.h
SMACK SECURITY MODULE
M: Casey Schaufler <casey@schaufler-ca.com>
L: linux-security-module@vger.kernel.org
@ -12802,6 +12817,16 @@ F: Documentation/sound/alsa/soc/
F: sound/soc/
F: include/sound/soc*
SOUNDWIRE SUBSYSTEM
M: Vinod Koul <vinod.koul@intel.com>
M: Sanyog Kale <sanyog.r.kale@intel.com>
R: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
S: Supported
F: Documentation/driver-api/soundwire/
F: drivers/soundwire/
F: include/linux/soundwire/
SP2 MEDIA DRIVER
M: Olli Salonen <olli.salonen@iki.fi>
L: linux-media@vger.kernel.org
@ -14672,6 +14697,15 @@ S: Maintained
F: drivers/virtio/virtio_input.c
F: include/uapi/linux/virtio_input.h
VIRTUAL BOX GUEST DEVICE DRIVER
M: Hans de Goede <hdegoede@redhat.com>
M: Arnd Bergmann <arnd@arndb.de>
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Maintained
F: include/linux/vbox_utils.h
F: include/uapi/linux/vbox*.h
F: drivers/virt/vboxguest/
VIRTUAL SERIO DEVICE DRIVER
M: Stephen Chandler Paul <thatslyude@gmail.com>
S: Maintained


@ -239,17 +239,24 @@ void hyperv_report_panic(struct pt_regs *regs, long err)
}
EXPORT_SYMBOL_GPL(hyperv_report_panic);
bool hv_is_hypercall_page_setup(void)
bool hv_is_hyperv_initialized(void)
{
union hv_x64_msr_hypercall_contents hypercall_msr;
/* Check if the hypercall page is setup */
/*
* Ensure that we're really on Hyper-V, and not a KVM or Xen
* emulation of Hyper-V
*/
if (x86_hyper_type != X86_HYPER_MS_HYPERV)
return false;
/*
* Verify that earlier initialization succeeded by checking
* that the hypercall page is setup
*/
hypercall_msr.as_uint64 = 0;
rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
if (!hypercall_msr.enable)
return false;
return true;
return hypercall_msr.enable;
}
EXPORT_SYMBOL_GPL(hv_is_hypercall_page_setup);
EXPORT_SYMBOL_GPL(hv_is_hyperv_initialized);


@ -314,11 +314,11 @@ void hyperv_init(void);
void hyperv_setup_mmu_ops(void);
void hyper_alloc_mmu(void);
void hyperv_report_panic(struct pt_regs *regs, long err);
bool hv_is_hypercall_page_setup(void);
bool hv_is_hyperv_initialized(void);
void hyperv_cleanup(void);
#else /* CONFIG_HYPERV */
static inline void hyperv_init(void) {}
static inline bool hv_is_hypercall_page_setup(void) { return false; }
static inline bool hv_is_hyperv_initialized(void) { return false; }
static inline void hyperv_cleanup(void) {}
static inline void hyperv_setup_mmu_ops(void) {}
#endif /* CONFIG_HYPERV */
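For context, a hedged example of how a caller might use the renamed helper declared above; the init function itself is illustrative only.

static int __init my_hv_util_init(void)
{
	/* Bail out unless we are really on Hyper-V and the hypercall page is set up. */
	if (!hv_is_hyperv_initialized())
		return -ENODEV;

	/* ... safe to use Hyper-V facilities from here ... */
	return 0;
}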


@ -153,6 +153,8 @@ source "drivers/remoteproc/Kconfig"
source "drivers/rpmsg/Kconfig"
source "drivers/soundwire/Kconfig"
source "drivers/soc/Kconfig"
source "drivers/devfreq/Kconfig"
@ -213,4 +215,8 @@ source "drivers/opp/Kconfig"
source "drivers/visorbus/Kconfig"
source "drivers/siox/Kconfig"
source "drivers/slimbus/Kconfig"
endmenu


@ -87,6 +87,7 @@ obj-$(CONFIG_MTD) += mtd/
obj-$(CONFIG_SPI) += spi/
obj-$(CONFIG_SPMI) += spmi/
obj-$(CONFIG_HSI) += hsi/
obj-$(CONFIG_SLIMBUS) += slimbus/
obj-y += net/
obj-$(CONFIG_ATM) += atm/
obj-$(CONFIG_FUSION) += message/
@ -157,6 +158,7 @@ obj-$(CONFIG_MAILBOX) += mailbox/
obj-$(CONFIG_HWSPINLOCK) += hwspinlock/
obj-$(CONFIG_REMOTEPROC) += remoteproc/
obj-$(CONFIG_RPMSG) += rpmsg/
obj-$(CONFIG_SOUNDWIRE) += soundwire/
# Virtualization drivers
obj-$(CONFIG_VIRT_DRIVERS) += virt/
@ -185,3 +187,4 @@ obj-$(CONFIG_FSI) += fsi/
obj-$(CONFIG_TEE) += tee/
obj-$(CONFIG_MULTIPLEXER) += mux/
obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/
obj-$(CONFIG_SIOX) += siox/


@ -141,7 +141,7 @@ enum {
};
static uint32_t binder_debug_mask = BINDER_DEBUG_USER_ERROR |
BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION;
module_param_named(debug_mask, binder_debug_mask, uint, S_IWUSR | S_IRUGO);
module_param_named(debug_mask, binder_debug_mask, uint, 0644);
static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
module_param_named(devices, binder_devices_param, charp, 0444);
@ -160,7 +160,7 @@ static int binder_set_stop_on_user_error(const char *val,
return ret;
}
module_param_call(stop_on_user_error, binder_set_stop_on_user_error,
param_get_int, &binder_stop_on_user_error, S_IWUSR | S_IRUGO);
param_get_int, &binder_stop_on_user_error, 0644);
#define binder_debug(mask, x...) \
do { \
@ -249,7 +249,7 @@ static struct binder_transaction_log_entry *binder_transaction_log_add(
unsigned int cur = atomic_inc_return(&log->cur);
if (cur >= ARRAY_SIZE(log->entry))
log->full = 1;
log->full = true;
e = &log->entry[cur % ARRAY_SIZE(log->entry)];
WRITE_ONCE(e->debug_id_done, 0);
/*
@ -493,8 +493,6 @@ enum binder_deferred_state {
* (protected by @inner_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @wait: wait queue head to wait for proc work
* (invariant after initialized)
* @stats: per-process binder statistics
* (atomics, no lock needed)
* @delivered_death: list of delivered death notification
@ -537,7 +535,6 @@ struct binder_proc {
bool is_dead;
struct list_head todo;
wait_queue_head_t wait;
struct binder_stats stats;
struct list_head delivered_death;
int max_threads;
@ -579,6 +576,8 @@ enum {
* (protected by @proc->inner_lock)
* @todo: list of work to do for this thread
* (protected by @proc->inner_lock)
* @process_todo: whether work in @todo should be processed
* (protected by @proc->inner_lock)
* @return_error: transaction errors reported by this thread
* (only accessed by this thread)
* @reply_error: transaction errors reported by target thread
@ -604,6 +603,7 @@ struct binder_thread {
bool looper_need_return; /* can be written by other thread */
struct binder_transaction *transaction_stack;
struct list_head todo;
bool process_todo;
struct binder_error return_error;
struct binder_error reply_error;
wait_queue_head_t wait;
@ -789,6 +789,16 @@ static bool binder_worklist_empty(struct binder_proc *proc,
return ret;
}
/**
* binder_enqueue_work_ilocked() - Add an item to the work list
* @work: struct binder_work to add to list
* @target_list: list to add work to
*
* Adds the work to the specified list. Asserts that work
* is not already on a list.
*
* Requires the proc->inner_lock to be held.
*/
static void
binder_enqueue_work_ilocked(struct binder_work *work,
struct list_head *target_list)
@ -799,22 +809,56 @@ binder_enqueue_work_ilocked(struct binder_work *work,
}
/**
* binder_enqueue_work() - Add an item to the work list
* @proc: binder_proc associated with list
* binder_enqueue_deferred_thread_work_ilocked() - Add deferred thread work
* @thread: thread to queue work to
* @work: struct binder_work to add to list
* @target_list: list to add work to
*
* Adds the work to the specified list. Asserts that work
* is not already on a list.
* Adds the work to the todo list of the thread. Doesn't set the process_todo
* flag, which means that (if it wasn't already set) the thread will go to
* sleep without handling this work when it calls read.
*
* Requires the proc->inner_lock to be held.
*/
static void
binder_enqueue_work(struct binder_proc *proc,
struct binder_work *work,
struct list_head *target_list)
binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread,
struct binder_work *work)
{
binder_inner_proc_lock(proc);
binder_enqueue_work_ilocked(work, target_list);
binder_inner_proc_unlock(proc);
binder_enqueue_work_ilocked(work, &thread->todo);
}
/**
* binder_enqueue_thread_work_ilocked() - Add an item to the thread work list
* @thread: thread to queue work to
* @work: struct binder_work to add to list
*
* Adds the work to the todo list of the thread, and enables processing
* of the todo queue.
*
* Requires the proc->inner_lock to be held.
*/
static void
binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
struct binder_work *work)
{
binder_enqueue_work_ilocked(work, &thread->todo);
thread->process_todo = true;
}
/**
* binder_enqueue_thread_work() - Add an item to the thread work list
* @thread: thread to queue work to
* @work: struct binder_work to add to list
*
* Adds the work to the todo list of the thread, and enables processing
* of the todo queue.
*/
static void
binder_enqueue_thread_work(struct binder_thread *thread,
struct binder_work *work)
{
binder_inner_proc_lock(thread->proc);
binder_enqueue_thread_work_ilocked(thread, work);
binder_inner_proc_unlock(thread->proc);
}
static void
@ -940,7 +984,7 @@ static long task_close_fd(struct binder_proc *proc, unsigned int fd)
static bool binder_has_work_ilocked(struct binder_thread *thread,
bool do_proc_work)
{
return !binder_worklist_empty_ilocked(&thread->todo) ||
return thread->process_todo ||
thread->looper_need_return ||
(do_proc_work &&
!binder_worklist_empty_ilocked(&thread->proc->todo));
@ -1228,6 +1272,17 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
node->local_strong_refs++;
if (!node->has_strong_ref && target_list) {
binder_dequeue_work_ilocked(&node->work);
/*
* Note: this function is the only place where we queue
* directly to a thread->todo without using the
* corresponding binder_enqueue_thread_work() helper
* functions; in this case it's ok to not set the
* process_todo flag, since we know this node work will
* always be followed by other work that starts queue
* processing: in case of synchronous transactions, a
* BR_REPLY or BR_ERROR; in case of oneway
* transactions, a BR_TRANSACTION_COMPLETE.
*/
binder_enqueue_work_ilocked(&node->work, target_list);
}
} else {
@ -1239,6 +1294,9 @@ static int binder_inc_node_nilocked(struct binder_node *node, int strong,
node->debug_id);
return -EINVAL;
}
/*
* See comment above
*/
binder_enqueue_work_ilocked(&node->work, target_list);
}
}
@ -1928,9 +1986,9 @@ static void binder_send_failed_reply(struct binder_transaction *t,
binder_pop_transaction_ilocked(target_thread, t);
if (target_thread->reply_error.cmd == BR_OK) {
target_thread->reply_error.cmd = error_code;
binder_enqueue_work_ilocked(
&target_thread->reply_error.work,
&target_thread->todo);
binder_enqueue_thread_work_ilocked(
target_thread,
&target_thread->reply_error.work);
wake_up_interruptible(&target_thread->wait);
} else {
WARN(1, "Unexpected reply error: %u\n",
@ -2569,20 +2627,18 @@ static bool binder_proc_transaction(struct binder_transaction *t,
struct binder_proc *proc,
struct binder_thread *thread)
{
struct list_head *target_list = NULL;
struct binder_node *node = t->buffer->target_node;
bool oneway = !!(t->flags & TF_ONE_WAY);
bool wakeup = true;
bool pending_async = false;
BUG_ON(!node);
binder_node_lock(node);
if (oneway) {
BUG_ON(thread);
if (node->has_async_transaction) {
target_list = &node->async_todo;
wakeup = false;
pending_async = true;
} else {
node->has_async_transaction = 1;
node->has_async_transaction = true;
}
}
@ -2594,19 +2650,17 @@ static bool binder_proc_transaction(struct binder_transaction *t,
return false;
}
if (!thread && !target_list)
if (!thread && !pending_async)
thread = binder_select_thread_ilocked(proc);
if (thread)
target_list = &thread->todo;
else if (!target_list)
target_list = &proc->todo;
binder_enqueue_thread_work_ilocked(thread, &t->work);
else if (!pending_async)
binder_enqueue_work_ilocked(&t->work, &proc->todo);
else
BUG_ON(target_list != &node->async_todo);
binder_enqueue_work_ilocked(&t->work, &node->async_todo);
binder_enqueue_work_ilocked(&t->work, target_list);
if (wakeup)
if (!pending_async)
binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);
binder_inner_proc_unlock(proc);
@ -3101,10 +3155,10 @@ static void binder_transaction(struct binder_proc *proc,
}
}
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
binder_enqueue_work(proc, tcomplete, &thread->todo);
t->work.type = BINDER_WORK_TRANSACTION;
if (reply) {
binder_enqueue_thread_work(thread, tcomplete);
binder_inner_proc_lock(target_proc);
if (target_thread->is_dead) {
binder_inner_proc_unlock(target_proc);
@ -3112,13 +3166,21 @@ static void binder_transaction(struct binder_proc *proc,
}
BUG_ON(t->buffer->async_transaction != 0);
binder_pop_transaction_ilocked(target_thread, in_reply_to);
binder_enqueue_work_ilocked(&t->work, &target_thread->todo);
binder_enqueue_thread_work_ilocked(target_thread, &t->work);
binder_inner_proc_unlock(target_proc);
wake_up_interruptible_sync(&target_thread->wait);
binder_free_transaction(in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
binder_inner_proc_lock(proc);
/*
* Defer the TRANSACTION_COMPLETE, so we don't return to
* userspace immediately; this allows the target process to
* immediately start processing this transaction, reducing
* latency. We will then return the TRANSACTION_COMPLETE when
* the target replies (or there is an error).
*/
binder_enqueue_deferred_thread_work_ilocked(thread, tcomplete);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
@ -3132,6 +3194,7 @@ static void binder_transaction(struct binder_proc *proc,
} else {
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
binder_enqueue_thread_work(thread, tcomplete);
if (!binder_proc_transaction(t, target_proc, NULL))
goto err_dead_proc_or_thread;
}
@ -3210,15 +3273,11 @@ static void binder_transaction(struct binder_proc *proc,
BUG_ON(thread->return_error.cmd != BR_OK);
if (in_reply_to) {
thread->return_error.cmd = BR_TRANSACTION_COMPLETE;
binder_enqueue_work(thread->proc,
&thread->return_error.work,
&thread->todo);
binder_enqueue_thread_work(thread, &thread->return_error.work);
binder_send_failed_reply(in_reply_to, return_error);
} else {
thread->return_error.cmd = return_error;
binder_enqueue_work(thread->proc,
&thread->return_error.work,
&thread->todo);
binder_enqueue_thread_work(thread, &thread->return_error.work);
}
}
@ -3424,7 +3483,7 @@ static int binder_thread_write(struct binder_proc *proc,
w = binder_dequeue_work_head_ilocked(
&buf_node->async_todo);
if (!w) {
buf_node->has_async_transaction = 0;
buf_node->has_async_transaction = false;
} else {
binder_enqueue_work_ilocked(
w, &proc->todo);
@ -3522,10 +3581,9 @@ static int binder_thread_write(struct binder_proc *proc,
WARN_ON(thread->return_error.cmd !=
BR_OK);
thread->return_error.cmd = BR_ERROR;
binder_enqueue_work(
thread->proc,
&thread->return_error.work,
&thread->todo);
binder_enqueue_thread_work(
thread,
&thread->return_error.work);
binder_debug(
BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d BC_REQUEST_DEATH_NOTIFICATION failed\n",
@ -3605,9 +3663,9 @@ static int binder_thread_write(struct binder_proc *proc,
if (thread->looper &
(BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))
binder_enqueue_work_ilocked(
&death->work,
&thread->todo);
binder_enqueue_thread_work_ilocked(
thread,
&death->work);
else {
binder_enqueue_work_ilocked(
&death->work,
@ -3662,8 +3720,8 @@ static int binder_thread_write(struct binder_proc *proc,
if (thread->looper &
(BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))
binder_enqueue_work_ilocked(
&death->work, &thread->todo);
binder_enqueue_thread_work_ilocked(
thread, &death->work);
else {
binder_enqueue_work_ilocked(
&death->work,
@ -3837,6 +3895,8 @@ static int binder_thread_read(struct binder_proc *proc,
break;
}
w = binder_dequeue_work_head_ilocked(list);
if (binder_worklist_empty_ilocked(&thread->todo))
thread->process_todo = false;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
@ -4302,6 +4362,18 @@ static int binder_thread_release(struct binder_proc *proc,
if (t)
spin_lock(&t->lock);
}
/*
* If this thread used poll, make sure we remove the waitqueue
* from any epoll data structures holding it with POLLFREE.
* waitqueue_active() is safe to use here because we're holding
* the inner lock.
*/
if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
waitqueue_active(&thread->wait)) {
wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
}
binder_inner_proc_unlock(thread->proc);
if (send_reply)
@ -4646,7 +4718,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
return 0;
err_bad_arg:
pr_err("binder_mmap: %d %lx-%lx %s failed %d\n",
pr_err("%s: %d %lx-%lx %s failed %d\n", __func__,
proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
return ret;
}
@ -4656,7 +4728,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
struct binder_proc *proc;
struct binder_device *binder_dev;
binder_debug(BINDER_DEBUG_OPEN_CLOSE, "binder_open: %d:%d\n",
binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
current->group_leader->pid, current->pid);
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
@ -4695,7 +4767,7 @@ static int binder_open(struct inode *nodp, struct file *filp)
* anyway print all contexts that a given PID has, so this
* is not a problem.
*/
proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
binder_debugfs_dir_entry_proc,
(void *)(unsigned long)proc->pid,
&binder_proc_fops);
@ -5524,7 +5596,9 @@ static int __init binder_init(void)
struct binder_device *device;
struct hlist_node *tmp;
binder_alloc_shrinker_init();
ret = binder_alloc_shrinker_init();
if (ret)
return ret;
atomic_set(&binder_transaction_log.cur, ~0U);
atomic_set(&binder_transaction_log_failed.cur, ~0U);
@ -5536,27 +5610,27 @@ static int __init binder_init(void)
if (binder_debugfs_dir_entry_root) {
debugfs_create_file("state",
S_IRUGO,
0444,
binder_debugfs_dir_entry_root,
NULL,
&binder_state_fops);
debugfs_create_file("stats",
S_IRUGO,
0444,
binder_debugfs_dir_entry_root,
NULL,
&binder_stats_fops);
debugfs_create_file("transactions",
S_IRUGO,
0444,
binder_debugfs_dir_entry_root,
NULL,
&binder_transactions_fops);
debugfs_create_file("transaction_log",
S_IRUGO,
0444,
binder_debugfs_dir_entry_root,
&binder_transaction_log,
&binder_transaction_log_fops);
debugfs_create_file("failed_transaction_log",
S_IRUGO,
0444,
binder_debugfs_dir_entry_root,
&binder_transaction_log_failed,
&binder_transaction_log_fops);

View File

@ -281,6 +281,9 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
goto err_vm_insert_page_failed;
}
if (index + 1 > alloc->pages_high)
alloc->pages_high = index + 1;
trace_binder_alloc_page_end(alloc, index);
/* vm_insert_page does not seem to increment the refcount */
}
@ -324,11 +327,12 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
return vma ? -ENOMEM : -ESRCH;
}
struct binder_buffer *binder_alloc_new_buf_locked(struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async)
static struct binder_buffer *binder_alloc_new_buf_locked(
struct binder_alloc *alloc,
size_t data_size,
size_t offsets_size,
size_t extra_buffers_size,
int is_async)
{
struct rb_node *n = alloc->free_buffers.rb_node;
struct binder_buffer *buffer;
@ -666,7 +670,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
goto err_already_mapped;
}
area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
if (area == NULL) {
ret = -ENOMEM;
failure_string = "get_vm_area";
@ -853,6 +857,7 @@ void binder_alloc_print_pages(struct seq_file *m,
}
mutex_unlock(&alloc->mutex);
seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high);
}
/**
@ -1002,8 +1007,14 @@ void binder_alloc_init(struct binder_alloc *alloc)
INIT_LIST_HEAD(&alloc->buffers);
}
void binder_alloc_shrinker_init(void)
int binder_alloc_shrinker_init(void)
{
list_lru_init(&binder_alloc_lru);
register_shrinker(&binder_shrinker);
int ret = list_lru_init(&binder_alloc_lru);
if (ret == 0) {
ret = register_shrinker(&binder_shrinker);
if (ret)
list_lru_destroy(&binder_alloc_lru);
}
return ret;
}

View File

@ -92,6 +92,7 @@ struct binder_lru_page {
* @pages: array of binder_lru_page
* @buffer_size: size of address space specified via mmap
* @pid: pid for associated binder_proc (invariant after init)
* @pages_high: high watermark of offset in @pages
*
* Bookkeeping structure for per-proc address space management for binder
* buffers. It is normally initialized during binder_init() and binder_mmap()
@ -112,6 +113,7 @@ struct binder_alloc {
size_t buffer_size;
uint32_t buffer_free;
int pid;
size_t pages_high;
};
#ifdef CONFIG_ANDROID_BINDER_IPC_SELFTEST
@ -128,7 +130,7 @@ extern struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
size_t extra_buffers_size,
int is_async);
extern void binder_alloc_init(struct binder_alloc *alloc);
void binder_alloc_shrinker_init(void);
extern int binder_alloc_shrinker_init(void);
extern void binder_alloc_vma_close(struct binder_alloc *alloc);
extern struct binder_buffer *
binder_alloc_prepare_to_free(struct binder_alloc *alloc,

View File

@ -441,3 +441,7 @@ static struct platform_driver img_ascii_lcd_driver = {
.remove = img_ascii_lcd_remove,
};
module_platform_driver(img_ascii_lcd_driver);
MODULE_DESCRIPTION("Imagination Technologies ASCII LCD Display");
MODULE_AUTHOR("Paul Burton <paul.burton@mips.com>");
MODULE_LICENSE("GPL");

View File

@ -20,6 +20,10 @@ config REGMAP_I2C
tristate
depends on I2C
config REGMAP_SLIMBUS
tristate
depends on SLIMBUS
config REGMAP_SPI
tristate
depends on SPI

View File

@ -8,6 +8,7 @@ obj-$(CONFIG_REGCACHE_COMPRESSED) += regcache-lzo.o
obj-$(CONFIG_DEBUG_FS) += regmap-debugfs.o
obj-$(CONFIG_REGMAP_AC97) += regmap-ac97.o
obj-$(CONFIG_REGMAP_I2C) += regmap-i2c.o
obj-$(CONFIG_REGMAP_SLIMBUS) += regmap-slimbus.o
obj-$(CONFIG_REGMAP_SPI) += regmap-spi.o
obj-$(CONFIG_REGMAP_SPMI) += regmap-spmi.o
obj-$(CONFIG_REGMAP_MMIO) += regmap-mmio.o

View File

@ -0,0 +1,80 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2017, Linaro Ltd.
#include <linux/regmap.h>
#include <linux/slimbus.h>
#include <linux/module.h>
#include "internal.h"
static int regmap_slimbus_byte_reg_read(void *context, unsigned int reg,
unsigned int *val)
{
struct slim_device *sdev = context;
int v;
v = slim_readb(sdev, reg);
if (v < 0)
return v;
*val = v;
return 0;
}
static int regmap_slimbus_byte_reg_write(void *context, unsigned int reg,
unsigned int val)
{
struct slim_device *sdev = context;
return slim_writeb(sdev, reg, val);
}
static struct regmap_bus regmap_slimbus_bus = {
.reg_write = regmap_slimbus_byte_reg_write,
.reg_read = regmap_slimbus_byte_reg_read,
.reg_format_endian_default = REGMAP_ENDIAN_LITTLE,
.val_format_endian_default = REGMAP_ENDIAN_LITTLE,
};
static const struct regmap_bus *regmap_get_slimbus(struct slim_device *slim,
const struct regmap_config *config)
{
if (config->val_bits == 8 && config->reg_bits == 8)
return &regmap_slimbus_bus;
return ERR_PTR(-ENOTSUPP);
}
struct regmap *__regmap_init_slimbus(struct slim_device *slimbus,
const struct regmap_config *config,
struct lock_class_key *lock_key,
const char *lock_name)
{
const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config);
if (IS_ERR(bus))
return ERR_CAST(bus);
return __regmap_init(&slimbus->dev, bus, slimbus, config,
lock_key, lock_name);
}
EXPORT_SYMBOL_GPL(__regmap_init_slimbus);
struct regmap *__devm_regmap_init_slimbus(struct slim_device *slimbus,
const struct regmap_config *config,
struct lock_class_key *lock_key,
const char *lock_name)
{
const struct regmap_bus *bus = regmap_get_slimbus(slimbus, config);
if (IS_ERR(bus))
return ERR_CAST(bus);
return __devm_regmap_init(&slimbus->dev, bus, slimbus, config,
lock_key, lock_name);
}
EXPORT_SYMBOL_GPL(__devm_regmap_init_slimbus);
MODULE_LICENSE("GPL v2");

View File

@ -659,17 +659,31 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd,
return retval;
}
static int lp_set_timeout(unsigned int minor, struct timeval *par_timeout)
static int lp_set_timeout(unsigned int minor, s64 tv_sec, long tv_usec)
{
long to_jiffies;
/* Convert to jiffies, place in lp_table */
if ((par_timeout->tv_sec < 0) ||
(par_timeout->tv_usec < 0)) {
if (tv_sec < 0 || tv_usec < 0)
return -EINVAL;
/*
* we used to not check, so let's not make this fatal,
* but deal with user space passing a 32-bit tv_nsec in
* a 64-bit field, capping the timeout to 1 second
* worth of microseconds, and capping the total at
* MAX_JIFFY_OFFSET.
*/
if (tv_usec > 999999)
tv_usec = 999999;
if (tv_sec >= MAX_SEC_IN_JIFFIES - 1) {
to_jiffies = MAX_JIFFY_OFFSET;
} else {
to_jiffies = DIV_ROUND_UP(tv_usec, 1000000/HZ);
to_jiffies += tv_sec * (long) HZ;
}
to_jiffies = DIV_ROUND_UP(par_timeout->tv_usec, 1000000/HZ);
to_jiffies += par_timeout->tv_sec * (long) HZ;
if (to_jiffies <= 0) {
return -EINVAL;
}
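
To make the conversion above concrete, a worked example with illustrative values, assuming HZ == 100:

        tv_sec = 2, tv_usec = 1500000  ->  tv_usec capped to 999999
        to_jiffies  = DIV_ROUND_UP(999999, 1000000 / 100) = DIV_ROUND_UP(999999, 10000) = 100
        to_jiffies += 2 * 100, giving 300 jiffies total (about 3 seconds)
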
@ -677,23 +691,43 @@ static int lp_set_timeout(unsigned int minor, struct timeval *par_timeout)
return 0;
}
static int lp_set_timeout32(unsigned int minor, void __user *arg)
{
s32 karg[2];
if (copy_from_user(karg, arg, sizeof(karg)))
return -EFAULT;
return lp_set_timeout(minor, karg[0], karg[1]);
}
static int lp_set_timeout64(unsigned int minor, void __user *arg)
{
s64 karg[2];
if (copy_from_user(karg, arg, sizeof(karg)))
return -EFAULT;
return lp_set_timeout(minor, karg[0], karg[1]);
}
static long lp_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
unsigned int minor;
struct timeval par_timeout;
int ret;
minor = iminor(file_inode(file));
mutex_lock(&lp_mutex);
switch (cmd) {
case LPSETTIMEOUT:
if (copy_from_user(&par_timeout, (void __user *)arg,
sizeof (struct timeval))) {
ret = -EFAULT;
case LPSETTIMEOUT_OLD:
if (BITS_PER_LONG == 32) {
ret = lp_set_timeout32(minor, (void __user *)arg);
break;
}
ret = lp_set_timeout(minor, &par_timeout);
/* fallthrough for 64-bit */
case LPSETTIMEOUT_NEW:
ret = lp_set_timeout64(minor, (void __user *)arg);
break;
default:
ret = lp_do_ioctl(minor, cmd, arg, (void __user *)arg);
@ -709,18 +743,19 @@ static long lp_compat_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
unsigned int minor;
struct timeval par_timeout;
int ret;
minor = iminor(file_inode(file));
mutex_lock(&lp_mutex);
switch (cmd) {
case LPSETTIMEOUT:
if (compat_get_timeval(&par_timeout, compat_ptr(arg))) {
ret = -EFAULT;
case LPSETTIMEOUT_OLD:
if (!COMPAT_USE_64BIT_TIME) {
ret = lp_set_timeout32(minor, (void __user *)arg);
break;
}
ret = lp_set_timeout(minor, &par_timeout);
/* fallthrough for x32 mode */
case LPSETTIMEOUT_NEW:
ret = lp_set_timeout64(minor, (void __user *)arg);
break;
#ifdef LP_STATS
case LPGETSTATS:
@ -865,7 +900,7 @@ static int __init lp_setup (char *str)
printk(KERN_INFO "lp: too many ports, %s ignored.\n",
str);
} else if (!strcmp(str, "reset")) {
reset = 1;
reset = true;
}
return 1;
}
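
From user space, the new 64-bit timeout path could be exercised roughly as in the sketch below; it assumes the updated <linux/lp.h> exposes LPSETTIMEOUT_NEW (only the kernel-side handling appears in this diff), and the helper name and device path are invented.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/lp.h>           /* assumed to define LPSETTIMEOUT_NEW */

static int sketch_set_lp_timeout(const char *dev, int64_t sec, int64_t usec)
{
        int64_t timeout[2] = { sec, usec };     /* mirrors the s64 karg[2] above */
        int fd = open(dev, O_WRONLY);
        int ret;

        if (fd < 0)
                return -1;
        ret = ioctl(fd, LPSETTIMEOUT_NEW, timeout);
        close(fd);
        return ret;
}

For example, sketch_set_lp_timeout("/dev/lp0", 5, 0) would request a five second timeout.
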

View File

@ -107,6 +107,8 @@ static ssize_t read_mem(struct file *file, char __user *buf,
phys_addr_t p = *ppos;
ssize_t read, sz;
void *ptr;
char *bounce;
int err;
if (p != *ppos)
return 0;
@ -129,15 +131,22 @@ static ssize_t read_mem(struct file *file, char __user *buf,
}
#endif
bounce = kmalloc(PAGE_SIZE, GFP_KERNEL);
if (!bounce)
return -ENOMEM;
while (count > 0) {
unsigned long remaining;
int allowed;
sz = size_inside_page(p, count);
err = -EPERM;
allowed = page_is_allowed(p >> PAGE_SHIFT);
if (!allowed)
return -EPERM;
goto failed;
err = -EFAULT;
if (allowed == 2) {
/* Show zeros for restricted memory. */
remaining = clear_user(buf, sz);
@ -149,24 +158,32 @@ static ssize_t read_mem(struct file *file, char __user *buf,
*/
ptr = xlate_dev_mem_ptr(p);
if (!ptr)
return -EFAULT;
remaining = copy_to_user(buf, ptr, sz);
goto failed;
err = probe_kernel_read(bounce, ptr, sz);
unxlate_dev_mem_ptr(p, ptr);
if (err)
goto failed;
remaining = copy_to_user(buf, bounce, sz);
}
if (remaining)
return -EFAULT;
goto failed;
buf += sz;
p += sz;
count -= sz;
read += sz;
}
kfree(bounce);
*ppos += read;
return read;
failed:
kfree(bounce);
return err;
}
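
The new read path is the classic bounce-buffer pattern: never copy_to_user() directly out of a mapping that might fault, stage through ordinary kernel memory first so probe_kernel_read() can report -EFAULT instead. A stripped-down sketch of the pattern, not the /dev/mem code itself:

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

/* Copy up to one page from a possibly unsafe kernel mapping @src to @buf. */
static ssize_t sketch_bounce_copy(void __user *buf, const void *src, size_t sz)
{
        void *bounce = kmalloc(PAGE_SIZE, GFP_KERNEL);
        ssize_t ret;

        if (!bounce)
                return -ENOMEM;
        if (sz > PAGE_SIZE)
                sz = PAGE_SIZE;

        if (probe_kernel_read(bounce, src, sz) || copy_to_user(buf, bounce, sz))
                ret = -EFAULT;
        else
                ret = sz;

        kfree(bounce);
        return ret;
}
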
static ssize_t write_mem(struct file *file, const char __user *buf,

View File

@ -4,7 +4,7 @@
config XILLYBUS
tristate "Xillybus generic FPGA interface"
depends on PCI || (OF_ADDRESS && OF_IRQ)
depends on PCI || OF
select CRC32
help
Xillybus is a generic interface for peripherals designed on
@ -24,7 +24,7 @@ config XILLYBUS_PCIE
config XILLYBUS_OF
tristate "Xillybus over Device Tree"
depends on OF_ADDRESS && OF_IRQ && HAS_DMA
depends on OF && HAS_DMA
help
Set to M if you want Xillybus to find its resources from the
Open Firmware Flattened Device Tree. If the target is an embedded

View File

@ -15,10 +15,6 @@
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/err.h>
#include "xillybus.h"
@ -123,7 +119,7 @@ static int xilly_drv_probe(struct platform_device *op)
struct xilly_endpoint *endpoint;
int rc;
int irq;
struct resource res;
struct resource *res;
struct xilly_endpoint_hardware *ephw = &of_hw;
if (of_property_read_bool(dev->of_node, "dma-coherent"))
@ -136,13 +132,13 @@ static int xilly_drv_probe(struct platform_device *op)
dev_set_drvdata(dev, endpoint);
rc = of_address_to_resource(dev->of_node, 0, &res);
endpoint->registers = devm_ioremap_resource(dev, &res);
res = platform_get_resource(op, IORESOURCE_MEM, 0);
endpoint->registers = devm_ioremap_resource(dev, res);
if (IS_ERR(endpoint->registers))
return PTR_ERR(endpoint->registers);
irq = irq_of_parse_and_map(dev->of_node, 0);
irq = platform_get_irq(op, 0);
rc = devm_request_irq(dev, irq, xillybus_isr, 0, xillyname, endpoint);
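
The conversion above is the stock platform-device idiom, which works for both Device Tree and non-DT probing. A standalone sketch of the same pattern with invented names (the hunk above does not check the platform_get_irq() return value; the sketch does):

#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/platform_device.h>

static irqreturn_t sketch_isr(int irq, void *data)
{
        return IRQ_HANDLED;
}

static int sketch_probe(struct platform_device *pdev)
{
        struct resource *res;
        void __iomem *regs;
        int irq;

        /* MMIO window resolved by the platform core from DT/ACPI/board code. */
        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(regs))
                return PTR_ERR(regs);

        irq = platform_get_irq(pdev, 0);
        if (irq < 0)
                return irq;

        return devm_request_irq(&pdev->dev, irq, sketch_isr, 0,
                                dev_name(&pdev->dev), regs);
}
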

View File

@ -75,9 +75,9 @@ static void __init eisa_name_device(struct eisa_device *edev)
static char __init *decode_eisa_sig(unsigned long addr)
{
static char sig_str[EISA_SIG_LEN];
static char sig_str[EISA_SIG_LEN];
u8 sig[4];
u16 rev;
u16 rev;
int i;
for (i = 0; i < 4; i++) {
@ -96,14 +96,14 @@ static char __init *decode_eisa_sig(unsigned long addr)
if (!i && (sig[0] & 0x80))
return NULL;
}
sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1);
sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1);
sig_str[2] = (sig[1] & 0x1f) + ('A' - 1);
rev = (sig[2] << 8) | sig[3];
sprintf(sig_str + 3, "%04X", rev);
return sig_str;
sig_str[0] = ((sig[0] >> 2) & 0x1f) + ('A' - 1);
sig_str[1] = (((sig[0] & 3) << 3) | (sig[1] >> 5)) + ('A' - 1);
sig_str[2] = (sig[1] & 0x1f) + ('A' - 1);
rev = (sig[2] << 8) | sig[3];
sprintf(sig_str + 3, "%04X", rev);
return sig_str;
}
static int eisa_bus_match(struct device *dev, struct device_driver *drv)
@ -198,7 +198,7 @@ static int __init eisa_init_device(struct eisa_root_device *root,
sig = decode_eisa_sig(sig_addr);
if (!sig)
return -1; /* No EISA device here */
memcpy(edev->id.sig, sig, EISA_SIG_LEN);
edev->slot = slot;
edev->state = inb(SLOT_ADDRESS(root, slot) + EISA_CONFIG_OFFSET)
@ -222,7 +222,7 @@ static int __init eisa_init_device(struct eisa_root_device *root,
if (is_forced_dev(enable_dev, enable_dev_count, root, edev))
edev->state = EISA_CONFIG_ENABLED | EISA_CONFIG_FORCED;
if (is_forced_dev(disable_dev, disable_dev_count, root, edev))
edev->state = EISA_CONFIG_FORCED;
@ -275,7 +275,7 @@ static int __init eisa_request_resources(struct eisa_root_device *root,
edev->res[i].start = edev->res[i].end = 0;
continue;
}
if (slot) {
edev->res[i].name = NULL;
edev->res[i].start = SLOT_ADDRESS(root, slot)
@ -295,7 +295,7 @@ static int __init eisa_request_resources(struct eisa_root_device *root,
}
return 0;
failed:
while (--i >= 0)
release_resource(&edev->res[i]);
@ -314,7 +314,7 @@ static void __init eisa_release_resources(struct eisa_device *edev)
static int __init eisa_probe(struct eisa_root_device *root)
{
int i, c;
int i, c;
struct eisa_device *edev;
char *enabled_str;
@ -322,16 +322,14 @@ static int __init eisa_probe(struct eisa_root_device *root)
/* First try to get hold of slot 0. If there is no device
* here, simply fail, unless root->force_probe is set. */
edev = kzalloc(sizeof(*edev), GFP_KERNEL);
if (!edev) {
dev_err(root->dev, "EISA: Couldn't allocate mainboard slot\n");
if (!edev)
return -ENOMEM;
}
if (eisa_request_resources(root, edev, 0)) {
dev_warn(root->dev,
"EISA: Cannot allocate resource for mainboard\n");
"EISA: Cannot allocate resource for mainboard\n");
kfree(edev);
if (!root->force_probe)
return -EBUSY;
@ -350,14 +348,14 @@ static int __init eisa_probe(struct eisa_root_device *root)
if (eisa_register_device(edev)) {
dev_err(&edev->dev, "EISA: Failed to register %s\n",
edev->id.sig);
edev->id.sig);
eisa_release_resources(edev);
kfree(edev);
}
force_probe:
for (c = 0, i = 1; i <= root->slots; i++) {
for (c = 0, i = 1; i <= root->slots; i++) {
edev = kzalloc(sizeof(*edev), GFP_KERNEL);
if (!edev) {
dev_err(root->dev, "EISA: Out of memory for slot %d\n",
@ -367,8 +365,8 @@ static int __init eisa_probe(struct eisa_root_device *root)
if (eisa_request_resources(root, edev, i)) {
dev_warn(root->dev,
"Cannot allocate resource for EISA slot %d\n",
i);
"Cannot allocate resource for EISA slot %d\n",
i);
kfree(edev);
continue;
}
@ -395,11 +393,11 @@ static int __init eisa_probe(struct eisa_root_device *root)
if (eisa_register_device(edev)) {
dev_err(&edev->dev, "EISA: Failed to register %s\n",
edev->id.sig);
edev->id.sig);
eisa_release_resources(edev);
kfree(edev);
}
}
}
dev_info(root->dev, "EISA: Detected %d card%s\n", c, c == 1 ? "" : "s");
return 0;
@ -422,7 +420,7 @@ int __init eisa_root_register(struct eisa_root_device *root)
* been already registered. This prevents the virtual root
* device from registering after the real one has, for
* example... */
root->eisa_root_res.name = eisa_root_res.name;
root->eisa_root_res.start = root->res->start;
root->eisa_root_res.end = root->res->end;
@ -431,7 +429,7 @@ int __init eisa_root_register(struct eisa_root_device *root)
err = request_resource(&eisa_root_res, &root->eisa_root_res);
if (err)
return err;
root->bus_nr = eisa_bus_count++;
err = eisa_probe(root);
@ -444,7 +442,7 @@ int __init eisa_root_register(struct eisa_root_device *root)
static int __init eisa_init(void)
{
int r;
r = bus_register(&eisa_bus_type);
if (r)
return r;

View File

@ -50,11 +50,11 @@ static int __init pci_eisa_init(struct pci_dev *pdev)
return -1;
}
pci_eisa_root.dev = &pdev->dev;
pci_eisa_root.res = bus_res;
pci_eisa_root.bus_base_addr = bus_res->start;
pci_eisa_root.slots = EISA_MAX_SLOTS;
pci_eisa_root.dma_mask = pdev->dma_mask;
pci_eisa_root.dev = &pdev->dev;
pci_eisa_root.res = bus_res;
pci_eisa_root.bus_base_addr = bus_res->start;
pci_eisa_root.slots = EISA_MAX_SLOTS;
pci_eisa_root.dma_mask = pdev->dma_mask;
dev_set_drvdata(pci_eisa_root.dev, &pci_eisa_root);
if (eisa_root_register (&pci_eisa_root)) {

View File

@ -35,11 +35,11 @@ static struct platform_device eisa_root_dev = {
};
static struct eisa_root_device eisa_bus_root = {
.dev = &eisa_root_dev.dev,
.bus_base_addr = 0,
.res = &ioport_resource,
.slots = EISA_MAX_SLOTS,
.dma_mask = 0xffffffff,
.dev = &eisa_root_dev.dev,
.bus_base_addr = 0,
.res = &ioport_resource,
.slots = EISA_MAX_SLOTS,
.dma_mask = 0xffffffff,
};
static void virtual_eisa_release (struct device *dev)
@ -50,13 +50,12 @@ static void virtual_eisa_release (struct device *dev)
static int __init virtual_eisa_root_init (void)
{
int r;
if ((r = platform_device_register (&eisa_root_dev))) {
return r;
}
if ((r = platform_device_register (&eisa_root_dev)))
return r;
eisa_bus_root.force_probe = force_probe;
dev_set_drvdata(&eisa_root_dev.dev, &eisa_bus_root);
if (eisa_root_register (&eisa_bus_root)) {

View File

@ -144,7 +144,7 @@ static int adc_jack_probe(struct platform_device *pdev)
return err;
data->irq = platform_get_irq(pdev, 0);
if (!data->irq) {
if (data->irq < 0) {
dev_err(&pdev->dev, "platform_get_irq failed\n");
return -ENODEV;
}

View File

@ -1,6 +1,7 @@
/*
* extcon-axp288.c - X-Power AXP288 PMIC extcon cable detection driver
*
* Copyright (C) 2016-2017 Hans de Goede <hdegoede@redhat.com>
* Copyright (C) 2015 Intel Corporation
* Author: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
*
@ -97,9 +98,11 @@ struct axp288_extcon_info {
struct device *dev;
struct regmap *regmap;
struct regmap_irq_chip_data *regmap_irqc;
struct delayed_work det_work;
int irq[EXTCON_IRQ_END];
struct extcon_dev *edev;
unsigned int previous_cable;
bool first_detect_done;
};
/* Power up/down reason string array */
@ -137,6 +140,25 @@ static void axp288_extcon_log_rsi(struct axp288_extcon_info *info)
regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask);
}
static void axp288_chrg_detect_complete(struct axp288_extcon_info *info)
{
/*
* We depend on other drivers to do things like mux the data lines,
* enable/disable vbus based on the id-pin, etc. Sometimes the BIOS has
* not set these things up correctly resulting in the initial charger
* cable type detection giving a wrong result and we end up not charging
* or charging at only 0.5A.
*
* So we schedule a second cable type detection after 2 seconds to
* give the other drivers time to load and do their thing.
*/
if (!info->first_detect_done) {
queue_delayed_work(system_wq, &info->det_work,
msecs_to_jiffies(2000));
info->first_detect_done = true;
}
}
static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info)
{
int ret, stat, cfg, pwr_stat;
@ -183,8 +205,8 @@ static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info)
cable = EXTCON_CHG_USB_DCP;
break;
default:
dev_warn(info->dev,
"disconnect or unknown or ID event\n");
dev_warn(info->dev, "unknown (reserved) bc detect result\n");
cable = EXTCON_CHG_USB_SDP;
}
no_vbus:
@ -201,6 +223,8 @@ static int axp288_handle_chrg_det_event(struct axp288_extcon_info *info)
info->previous_cable = cable;
}
axp288_chrg_detect_complete(info);
return 0;
dev_det_ret:
@ -222,8 +246,11 @@ static irqreturn_t axp288_extcon_isr(int irq, void *data)
return IRQ_HANDLED;
}
static void axp288_extcon_enable(struct axp288_extcon_info *info)
static void axp288_extcon_det_work(struct work_struct *work)
{
struct axp288_extcon_info *info =
container_of(work, struct axp288_extcon_info, det_work.work);
regmap_update_bits(info->regmap, AXP288_BC_GLOBAL_REG,
BC_GLOBAL_RUN, 0);
/* Enable the charger detection logic */
@ -245,6 +272,7 @@ static int axp288_extcon_probe(struct platform_device *pdev)
info->regmap = axp20x->regmap;
info->regmap_irqc = axp20x->regmap_irqc;
info->previous_cable = EXTCON_NONE;
INIT_DELAYED_WORK(&info->det_work, axp288_extcon_det_work);
platform_set_drvdata(pdev, info);
@ -290,7 +318,7 @@ static int axp288_extcon_probe(struct platform_device *pdev)
}
/* Start charger cable type detection */
axp288_extcon_enable(info);
queue_delayed_work(system_wq, &info->det_work, 0);
return 0;
}
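
The delayed re-detection above is a plain workqueue pattern. A minimal sketch of the same idea with invented names, not this driver's code:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct sketch_detect_ctx {
        struct delayed_work det_work;
        bool first_detect_done;
};

static void sketch_det_work(struct work_struct *work)
{
        struct sketch_detect_ctx *ctx =
                container_of(work, struct sketch_detect_ctx, det_work.work);

        /* ... run the cable-type detection here ... */

        /* Run once more, 2 s later, in case other drivers were not ready. */
        if (!ctx->first_detect_done) {
                ctx->first_detect_done = true;
                queue_delayed_work(system_wq, &ctx->det_work,
                                   msecs_to_jiffies(2000));
        }
}

static void sketch_detect_start(struct sketch_detect_ctx *ctx)
{
        INIT_DELAYED_WORK(&ctx->det_work, sketch_det_work);
        queue_delayed_work(system_wq, &ctx->det_work, 0);       /* first pass now */
}
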

View File

@ -266,7 +266,7 @@ static int max77693_muic_set_debounce_time(struct max77693_muic_info *info,
static int max77693_muic_set_path(struct max77693_muic_info *info,
u8 val, bool attached)
{
int ret = 0;
int ret;
unsigned int ctrl1, ctrl2 = 0;
if (attached)

View File

@ -204,7 +204,7 @@ static int max8997_muic_set_debounce_time(struct max8997_muic_info *info,
static int max8997_muic_set_path(struct max8997_muic_info *info,
u8 val, bool attached)
{
int ret = 0;
int ret;
u8 ctrl1, ctrl2 = 0;
if (attached)

View File

@ -11,33 +11,6 @@ menuconfig FPGA
if FPGA
config FPGA_REGION
tristate "FPGA Region"
depends on OF && FPGA_BRIDGE
help
FPGA Regions allow loading FPGA images under control of
the Device Tree.
config FPGA_MGR_ICE40_SPI
tristate "Lattice iCE40 SPI"
depends on OF && SPI
help
FPGA manager driver support for Lattice iCE40 FPGAs over SPI.
config FPGA_MGR_ALTERA_CVP
tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager"
depends on PCI
help
FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V
and Arria 10 Altera FPGAs using the CvP interface over PCIe.
config FPGA_MGR_ALTERA_PS_SPI
tristate "Altera FPGA Passive Serial over SPI"
depends on SPI
help
FPGA manager driver support for Altera Arria/Cyclone/Stratix
using the passive serial interface over SPI.
config FPGA_MGR_SOCFPGA
tristate "Altera SOCFPGA FPGA Manager"
depends on ARCH_SOCFPGA || COMPILE_TEST
@ -51,19 +24,31 @@ config FPGA_MGR_SOCFPGA_A10
help
FPGA manager driver support for Altera Arria10 SoCFPGA.
config FPGA_MGR_TS73XX
tristate "Technologic Systems TS-73xx SBC FPGA Manager"
depends on ARCH_EP93XX && MACH_TS72XX
help
FPGA manager driver support for the Altera Cyclone II FPGA
present on the TS-73xx SBC boards.
config ALTERA_PR_IP_CORE
tristate "Altera Partial Reconfiguration IP Core"
help
Core driver support for Altera Partial Reconfiguration IP component
config FPGA_MGR_XILINX_SPI
tristate "Xilinx Configuration over Slave Serial (SPI)"
config ALTERA_PR_IP_CORE_PLAT
tristate "Platform support of Altera Partial Reconfiguration IP Core"
depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM
help
Platform driver support for Altera Partial Reconfiguration IP
component
config FPGA_MGR_ALTERA_PS_SPI
tristate "Altera FPGA Passive Serial over SPI"
depends on SPI
help
FPGA manager driver support for Xilinx FPGA configuration
over slave serial interface.
FPGA manager driver support for Altera Arria/Cyclone/Stratix
using the passive serial interface over SPI.
config FPGA_MGR_ALTERA_CVP
tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager"
depends on PCI
help
FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V
and Arria 10 Altera FPGAs using the CvP interface over PCIe.
config FPGA_MGR_ZYNQ_FPGA
tristate "Xilinx Zynq FPGA"
@ -72,9 +57,28 @@ config FPGA_MGR_ZYNQ_FPGA
help
FPGA manager driver support for Xilinx Zynq FPGAs.
config FPGA_MGR_XILINX_SPI
tristate "Xilinx Configuration over Slave Serial (SPI)"
depends on SPI
help
FPGA manager driver support for Xilinx FPGA configuration
over slave serial interface.
config FPGA_MGR_ICE40_SPI
tristate "Lattice iCE40 SPI"
depends on OF && SPI
help
FPGA manager driver support for Lattice iCE40 FPGAs over SPI.
config FPGA_MGR_TS73XX
tristate "Technologic Systems TS-73xx SBC FPGA Manager"
depends on ARCH_EP93XX && MACH_TS72XX
help
FPGA manager driver support for the Altera Cyclone II FPGA
present on the TS-73xx SBC boards.
config FPGA_BRIDGE
tristate "FPGA Bridge Framework"
depends on OF
help
Say Y here if you want to support bridges connected between host
processors and FPGAs or between FPGAs.
@ -95,18 +99,6 @@ config ALTERA_FREEZE_BRIDGE
isolate one region of the FPGA from the busses while that
region is being reprogrammed.
config ALTERA_PR_IP_CORE
tristate "Altera Partial Reconfiguration IP Core"
help
Core driver support for Altera Partial Reconfiguration IP component
config ALTERA_PR_IP_CORE_PLAT
tristate "Platform support of Altera Partial Reconfiguration IP Core"
depends on ALTERA_PR_IP_CORE && OF && HAS_IOMEM
help
Platform driver support for Altera Partial Reconfiguration IP
component
config XILINX_PR_DECOUPLER
tristate "Xilinx LogiCORE PR Decoupler"
depends on FPGA_BRIDGE
@ -117,4 +109,19 @@ config XILINX_PR_DECOUPLER
region of the FPGA from the busses while that region is
being reprogrammed during partial reconfig.
config FPGA_REGION
tristate "FPGA Region"
depends on FPGA_BRIDGE
help
FPGA Region common code. A FPGA Region controls a FPGA Manager
and the FPGA Bridges associated with either a reconfigurable
region of an FPGA or a whole FPGA.
config OF_FPGA_REGION
tristate "FPGA Region Device Tree Overlay Support"
depends on OF && FPGA_REGION
help
Support for loading FPGA images by applying a Device Tree
overlay.
endif # FPGA

View File

@ -26,3 +26,4 @@ obj-$(CONFIG_XILINX_PR_DECOUPLER) += xilinx-pr-decoupler.o
# High Level Interfaces
obj-$(CONFIG_FPGA_REGION) += fpga-region.o
obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o

View File

@ -2,6 +2,7 @@
* FPGA Bridge Framework Driver
*
* Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved.
* Copyright (C) 2017 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@ -70,32 +71,13 @@ int fpga_bridge_disable(struct fpga_bridge *bridge)
}
EXPORT_SYMBOL_GPL(fpga_bridge_disable);
/**
* of_fpga_bridge_get - get an exclusive reference to a fpga bridge
*
* @np: node pointer of a FPGA bridge
* @info: fpga image specific information
*
* Return fpga_bridge struct if successful.
* Return -EBUSY if someone already has a reference to the bridge.
* Return -ENODEV if @np is not a FPGA Bridge.
*/
struct fpga_bridge *of_fpga_bridge_get(struct device_node *np,
struct fpga_image_info *info)
static struct fpga_bridge *__fpga_bridge_get(struct device *dev,
struct fpga_image_info *info)
{
struct device *dev;
struct fpga_bridge *bridge;
int ret = -ENODEV;
dev = class_find_device(fpga_bridge_class, NULL, np,
fpga_bridge_of_node_match);
if (!dev)
goto err_dev;
bridge = to_fpga_bridge(dev);
if (!bridge)
goto err_dev;
bridge->info = info;
@ -117,8 +99,58 @@ struct fpga_bridge *of_fpga_bridge_get(struct device_node *np,
put_device(dev);
return ERR_PTR(ret);
}
/**
* of_fpga_bridge_get - get an exclusive reference to a fpga bridge
*
* @np: node pointer of a FPGA bridge
* @info: fpga image specific information
*
* Return fpga_bridge struct if successful.
* Return -EBUSY if someone already has a reference to the bridge.
* Return -ENODEV if @np is not a FPGA Bridge.
*/
struct fpga_bridge *of_fpga_bridge_get(struct device_node *np,
struct fpga_image_info *info)
{
struct device *dev;
dev = class_find_device(fpga_bridge_class, NULL, np,
fpga_bridge_of_node_match);
if (!dev)
return ERR_PTR(-ENODEV);
return __fpga_bridge_get(dev, info);
}
EXPORT_SYMBOL_GPL(of_fpga_bridge_get);
static int fpga_bridge_dev_match(struct device *dev, const void *data)
{
return dev->parent == data;
}
/**
* fpga_bridge_get - get an exclusive reference to a fpga bridge
* @dev: parent device that fpga bridge was registered with
* @info: fpga image specific information
*
* Given a device, get an exclusive reference to a fpga bridge.
*
* Return: fpga bridge struct or IS_ERR() condition containing error code.
*/
struct fpga_bridge *fpga_bridge_get(struct device *dev,
struct fpga_image_info *info)
{
struct device *bridge_dev;
bridge_dev = class_find_device(fpga_bridge_class, NULL, dev,
fpga_bridge_dev_match);
if (!bridge_dev)
return ERR_PTR(-ENODEV);
return __fpga_bridge_get(bridge_dev, info);
}
EXPORT_SYMBOL_GPL(fpga_bridge_get);
/**
* fpga_bridge_put - release a reference to a bridge
*
@ -206,7 +238,7 @@ void fpga_bridges_put(struct list_head *bridge_list)
EXPORT_SYMBOL_GPL(fpga_bridges_put);
/**
* fpga_bridges_get_to_list - get a bridge, add it to a list
* of_fpga_bridge_get_to_list - get a bridge, add it to a list
*
* @np: node pointer of a FPGA bridge
* @info: fpga image specific information
@ -216,14 +248,44 @@ EXPORT_SYMBOL_GPL(fpga_bridges_put);
*
* Return 0 for success, error code from of_fpga_bridge_get() otherwise.
*/
int fpga_bridge_get_to_list(struct device_node *np,
int of_fpga_bridge_get_to_list(struct device_node *np,
struct fpga_image_info *info,
struct list_head *bridge_list)
{
struct fpga_bridge *bridge;
unsigned long flags;
bridge = of_fpga_bridge_get(np, info);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
spin_lock_irqsave(&bridge_list_lock, flags);
list_add(&bridge->node, bridge_list);
spin_unlock_irqrestore(&bridge_list_lock, flags);
return 0;
}
EXPORT_SYMBOL_GPL(of_fpga_bridge_get_to_list);
/**
* fpga_bridge_get_to_list - given device, get a bridge, add it to a list
*
* @dev: FPGA bridge device
* @info: fpga image specific information
* @bridge_list: list of FPGA bridges
*
* Get an exclusive reference to the bridge and add it to the list.
*
* Return 0 for success, error code from fpga_bridge_get() otherwise.
*/
int fpga_bridge_get_to_list(struct device *dev,
struct fpga_image_info *info,
struct list_head *bridge_list)
{
struct fpga_bridge *bridge;
unsigned long flags;
bridge = of_fpga_bridge_get(np, info);
bridge = fpga_bridge_get(dev, info);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
@ -303,6 +365,7 @@ int fpga_bridge_register(struct device *dev, const char *name,
bridge->priv = priv;
device_initialize(&bridge->dev);
bridge->dev.groups = br_ops->groups;
bridge->dev.class = fpga_bridge_class;
bridge->dev.parent = dev;
bridge->dev.of_node = dev->of_node;
@ -381,7 +444,7 @@ static void __exit fpga_bridge_dev_exit(void)
}
MODULE_DESCRIPTION("FPGA Bridge Driver");
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
MODULE_AUTHOR("Alan Tull <atull@kernel.org>");
MODULE_LICENSE("GPL v2");
subsys_initcall(fpga_bridge_dev_init);
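
A consumer of the new device-based lookup might use it roughly as below; the parent device, the NULL image info and the helper name are assumptions for illustration:

#include <linux/fpga/fpga-bridge.h>

/* Exclusively grab the bridge registered on @dev and gate it around a
 * hypothetical reprogramming step. */
static int sketch_reprogram_behind_bridge(struct device *dev)
{
        struct fpga_bridge *bridge;
        int ret;

        bridge = fpga_bridge_get(dev, NULL);
        if (IS_ERR(bridge))
                return PTR_ERR(bridge);

        ret = fpga_bridge_disable(bridge);
        if (!ret) {
                /* ... program the FPGA region behind the bridge ... */
                ret = fpga_bridge_enable(bridge);
        }

        fpga_bridge_put(bridge);
        return ret;
}
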

View File

@ -2,6 +2,7 @@
* FPGA Manager Core
*
* Copyright (C) 2013-2015 Altera Corporation
* Copyright (C) 2017 Intel Corporation
*
* With code from the mailing list:
* Copyright (C) 2013 Xilinx, Inc.
@ -31,6 +32,40 @@
static DEFINE_IDA(fpga_mgr_ida);
static struct class *fpga_mgr_class;
struct fpga_image_info *fpga_image_info_alloc(struct device *dev)
{
struct fpga_image_info *info;
get_device(dev);
info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL);
if (!info) {
put_device(dev);
return NULL;
}
info->dev = dev;
return info;
}
EXPORT_SYMBOL_GPL(fpga_image_info_alloc);
void fpga_image_info_free(struct fpga_image_info *info)
{
struct device *dev;
if (!info)
return;
dev = info->dev;
if (info->firmware_name)
devm_kfree(dev, info->firmware_name);
devm_kfree(dev, info);
put_device(dev);
}
EXPORT_SYMBOL_GPL(fpga_image_info_free);
/*
* Call the low level driver's write_init function. This will do the
* device-specific things to get the FPGA into the state where it is ready to
@ -137,8 +172,9 @@ static int fpga_mgr_write_complete(struct fpga_manager *mgr,
*
* Return: 0 on success, negative error code otherwise.
*/
int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info,
struct sg_table *sgt)
static int fpga_mgr_buf_load_sg(struct fpga_manager *mgr,
struct fpga_image_info *info,
struct sg_table *sgt)
{
int ret;
@ -170,7 +206,6 @@ int fpga_mgr_buf_load_sg(struct fpga_manager *mgr, struct fpga_image_info *info,
return fpga_mgr_write_complete(mgr, info);
}
EXPORT_SYMBOL_GPL(fpga_mgr_buf_load_sg);
static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr,
struct fpga_image_info *info,
@ -210,8 +245,9 @@ static int fpga_mgr_buf_load_mapped(struct fpga_manager *mgr,
*
* Return: 0 on success, negative error code otherwise.
*/
int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info,
const char *buf, size_t count)
static int fpga_mgr_buf_load(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *buf, size_t count)
{
struct page **pages;
struct sg_table sgt;
@ -266,7 +302,6 @@ int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info,
return rc;
}
EXPORT_SYMBOL_GPL(fpga_mgr_buf_load);
/**
* fpga_mgr_firmware_load - request firmware and load to fpga
@ -282,9 +317,9 @@ EXPORT_SYMBOL_GPL(fpga_mgr_buf_load);
*
* Return: 0 on success, negative error code otherwise.
*/
int fpga_mgr_firmware_load(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *image_name)
static int fpga_mgr_firmware_load(struct fpga_manager *mgr,
struct fpga_image_info *info,
const char *image_name)
{
struct device *dev = &mgr->dev;
const struct firmware *fw;
@ -307,7 +342,18 @@ int fpga_mgr_firmware_load(struct fpga_manager *mgr,
return ret;
}
EXPORT_SYMBOL_GPL(fpga_mgr_firmware_load);
int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info)
{
if (info->sgt)
return fpga_mgr_buf_load_sg(mgr, info, info->sgt);
if (info->buf && info->count)
return fpga_mgr_buf_load(mgr, info, info->buf, info->count);
if (info->firmware_name)
return fpga_mgr_firmware_load(mgr, info, info->firmware_name);
return -EINVAL;
}
EXPORT_SYMBOL_GPL(fpga_mgr_load);
static const char * const state_str[] = {
[FPGA_MGR_STATE_UNKNOWN] = "unknown",
@ -364,28 +410,17 @@ ATTRIBUTE_GROUPS(fpga_mgr);
static struct fpga_manager *__fpga_mgr_get(struct device *dev)
{
struct fpga_manager *mgr;
int ret = -ENODEV;
mgr = to_fpga_manager(dev);
if (!mgr)
goto err_dev;
/* Get exclusive use of fpga manager */
if (!mutex_trylock(&mgr->ref_mutex)) {
ret = -EBUSY;
goto err_dev;
}
if (!try_module_get(dev->parent->driver->owner))
goto err_ll_mod;
goto err_dev;
return mgr;
err_ll_mod:
mutex_unlock(&mgr->ref_mutex);
err_dev:
put_device(dev);
return ERR_PTR(ret);
return ERR_PTR(-ENODEV);
}
static int fpga_mgr_dev_match(struct device *dev, const void *data)
@ -394,10 +429,10 @@ static int fpga_mgr_dev_match(struct device *dev, const void *data)
}
/**
* fpga_mgr_get - get an exclusive reference to a fpga mgr
* fpga_mgr_get - get a reference to a fpga mgr
* @dev: parent device that fpga mgr was registered with
*
* Given a device, get an exclusive reference to a fpga mgr.
* Given a device, get a reference to a fpga mgr.
*
* Return: fpga manager struct or IS_ERR() condition containing error code.
*/
@ -418,10 +453,10 @@ static int fpga_mgr_of_node_match(struct device *dev, const void *data)
}
/**
* of_fpga_mgr_get - get an exclusive reference to a fpga mgr
* of_fpga_mgr_get - get a reference to a fpga mgr
* @node: device node
*
* Given a device node, get an exclusive reference to a fpga mgr.
* Given a device node, get a reference to a fpga mgr.
*
* Return: fpga manager struct or IS_ERR() condition containing error code.
*/
@ -445,11 +480,40 @@ EXPORT_SYMBOL_GPL(of_fpga_mgr_get);
void fpga_mgr_put(struct fpga_manager *mgr)
{
module_put(mgr->dev.parent->driver->owner);
mutex_unlock(&mgr->ref_mutex);
put_device(&mgr->dev);
}
EXPORT_SYMBOL_GPL(fpga_mgr_put);
/**
* fpga_mgr_lock - Lock FPGA manager for exclusive use
* @mgr: fpga manager
*
* Given a pointer to FPGA Manager (from fpga_mgr_get() or
* of_fpga_mgr_get()) attempt to get the mutex.
*
* Return: 0 for success or -EBUSY
*/
int fpga_mgr_lock(struct fpga_manager *mgr)
{
if (!mutex_trylock(&mgr->ref_mutex)) {
dev_err(&mgr->dev, "FPGA manager is in use.\n");
return -EBUSY;
}
return 0;
}
EXPORT_SYMBOL_GPL(fpga_mgr_lock);
/**
* fpga_mgr_unlock - Unlock FPGA manager
* @mgr: fpga manager
*/
void fpga_mgr_unlock(struct fpga_manager *mgr)
{
mutex_unlock(&mgr->ref_mutex);
}
EXPORT_SYMBOL_GPL(fpga_mgr_unlock);
/**
* fpga_mgr_register - register a low level fpga manager driver
* @dev: fpga manager device from pdev
@ -503,6 +567,7 @@ int fpga_mgr_register(struct device *dev, const char *name,
device_initialize(&mgr->dev);
mgr->dev.class = fpga_mgr_class;
mgr->dev.groups = mops->groups;
mgr->dev.parent = dev;
mgr->dev.of_node = dev->of_node;
mgr->dev.id = id;
@ -578,7 +643,7 @@ static void __exit fpga_mgr_class_exit(void)
ida_destroy(&fpga_mgr_ida);
}
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
MODULE_AUTHOR("Alan Tull <atull@kernel.org>");
MODULE_DESCRIPTION("FPGA manager framework");
MODULE_LICENSE("GPL v2");

View File

@ -2,6 +2,7 @@
* FPGA Region - Device Tree support for FPGA programming under Linux
*
* Copyright (C) 2013-2016 Altera Corporation
* Copyright (C) 2017 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@ -18,61 +19,30 @@
#include <linux/fpga/fpga-bridge.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-region.h>
#include <linux/idr.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
/**
* struct fpga_region - FPGA Region structure
* @dev: FPGA Region device
* @mutex: enforces exclusive reference to region
* @bridge_list: list of FPGA bridges specified in region
* @info: fpga image specific information
*/
struct fpga_region {
struct device dev;
struct mutex mutex; /* for exclusive reference to region */
struct list_head bridge_list;
struct fpga_image_info *info;
};
#define to_fpga_region(d) container_of(d, struct fpga_region, dev)
static DEFINE_IDA(fpga_region_ida);
static struct class *fpga_region_class;
static const struct of_device_id fpga_region_of_match[] = {
{ .compatible = "fpga-region", },
{},
};
MODULE_DEVICE_TABLE(of, fpga_region_of_match);
static int fpga_region_of_node_match(struct device *dev, const void *data)
{
return dev->of_node == data;
}
/**
* fpga_region_find - find FPGA region
* @np: device node of FPGA Region
* Caller will need to put_device(&region->dev) when done.
* Returns FPGA Region struct or NULL
*/
static struct fpga_region *fpga_region_find(struct device_node *np)
struct fpga_region *fpga_region_class_find(
struct device *start, const void *data,
int (*match)(struct device *, const void *))
{
struct device *dev;
dev = class_find_device(fpga_region_class, NULL, np,
fpga_region_of_node_match);
dev = class_find_device(fpga_region_class, start, data, match);
if (!dev)
return NULL;
return to_fpga_region(dev);
}
EXPORT_SYMBOL_GPL(fpga_region_class_find);
/**
* fpga_region_get - get an exclusive reference to a fpga region
@ -94,15 +64,13 @@ static struct fpga_region *fpga_region_get(struct fpga_region *region)
}
get_device(dev);
of_node_get(dev->of_node);
if (!try_module_get(dev->parent->driver->owner)) {
of_node_put(dev->of_node);
put_device(dev);
mutex_unlock(&region->mutex);
return ERR_PTR(-ENODEV);
}
dev_dbg(&region->dev, "get\n");
dev_dbg(dev, "get\n");
return region;
}
@ -116,403 +84,99 @@ static void fpga_region_put(struct fpga_region *region)
{
struct device *dev = &region->dev;
dev_dbg(&region->dev, "put\n");
dev_dbg(dev, "put\n");
module_put(dev->parent->driver->owner);
of_node_put(dev->of_node);
put_device(dev);
mutex_unlock(&region->mutex);
}
/**
* fpga_region_get_manager - get exclusive reference for FPGA manager
* @region: FPGA region
*
* Get FPGA Manager from "fpga-mgr" property or from ancestor region.
*
* Caller should call fpga_mgr_put() when done with manager.
*
* Return: fpga manager struct or IS_ERR() condition containing error code.
*/
static struct fpga_manager *fpga_region_get_manager(struct fpga_region *region)
{
struct device *dev = &region->dev;
struct device_node *np = dev->of_node;
struct device_node *mgr_node;
struct fpga_manager *mgr;
of_node_get(np);
while (np) {
if (of_device_is_compatible(np, "fpga-region")) {
mgr_node = of_parse_phandle(np, "fpga-mgr", 0);
if (mgr_node) {
mgr = of_fpga_mgr_get(mgr_node);
of_node_put(np);
return mgr;
}
}
np = of_get_next_parent(np);
}
of_node_put(np);
return ERR_PTR(-EINVAL);
}
/**
* fpga_region_get_bridges - create a list of bridges
* @region: FPGA region
* @overlay: device node of the overlay
*
* Create a list of bridges including the parent bridge and the bridges
* specified by "fpga-bridges" property. Note that the
* fpga_bridges_enable/disable/put functions are all fine with an empty list
* if that happens.
*
* Caller should call fpga_bridges_put(&region->bridge_list) when
* done with the bridges.
*
* Return 0 for success (even if there are no bridges specified)
* or -EBUSY if any of the bridges are in use.
*/
static int fpga_region_get_bridges(struct fpga_region *region,
struct device_node *overlay)
{
struct device *dev = &region->dev;
struct device_node *region_np = dev->of_node;
struct device_node *br, *np, *parent_br = NULL;
int i, ret;
/* If parent is a bridge, add to list */
ret = fpga_bridge_get_to_list(region_np->parent, region->info,
&region->bridge_list);
if (ret == -EBUSY)
return ret;
if (!ret)
parent_br = region_np->parent;
/* If overlay has a list of bridges, use it. */
if (of_parse_phandle(overlay, "fpga-bridges", 0))
np = overlay;
else
np = region_np;
for (i = 0; ; i++) {
br = of_parse_phandle(np, "fpga-bridges", i);
if (!br)
break;
/* If parent bridge is in list, skip it. */
if (br == parent_br)
continue;
/* If node is a bridge, get it and add to list */
ret = fpga_bridge_get_to_list(br, region->info,
&region->bridge_list);
/* If any of the bridges are in use, give up */
if (ret == -EBUSY) {
fpga_bridges_put(&region->bridge_list);
return -EBUSY;
}
}
return 0;
}
/**
* fpga_region_program_fpga - program FPGA
* @region: FPGA region
* @firmware_name: name of FPGA image firmware file
* @overlay: device node of the overlay
* Program an FPGA using information in the device tree.
* Function assumes that there is a firmware-name property.
* Program an FPGA using fpga image info (region->info).
* Return 0 for success or negative error code.
*/
static int fpga_region_program_fpga(struct fpga_region *region,
const char *firmware_name,
struct device_node *overlay)
int fpga_region_program_fpga(struct fpga_region *region)
{
struct fpga_manager *mgr;
struct device *dev = &region->dev;
struct fpga_image_info *info = region->info;
int ret;
region = fpga_region_get(region);
if (IS_ERR(region)) {
pr_err("failed to get fpga region\n");
dev_err(dev, "failed to get FPGA region\n");
return PTR_ERR(region);
}
mgr = fpga_region_get_manager(region);
if (IS_ERR(mgr)) {
pr_err("failed to get fpga region manager\n");
ret = PTR_ERR(mgr);
ret = fpga_mgr_lock(region->mgr);
if (ret) {
dev_err(dev, "FPGA manager is busy\n");
goto err_put_region;
}
ret = fpga_region_get_bridges(region, overlay);
if (ret) {
pr_err("failed to get fpga region bridges\n");
goto err_put_mgr;
/*
* In some cases, we already have a list of bridges in the
* fpga region struct. Or we don't have any bridges.
*/
if (region->get_bridges) {
ret = region->get_bridges(region);
if (ret) {
dev_err(dev, "failed to get fpga region bridges\n");
goto err_unlock_mgr;
}
}
ret = fpga_bridges_disable(&region->bridge_list);
if (ret) {
pr_err("failed to disable region bridges\n");
dev_err(dev, "failed to disable bridges\n");
goto err_put_br;
}
ret = fpga_mgr_firmware_load(mgr, region->info, firmware_name);
ret = fpga_mgr_load(region->mgr, info);
if (ret) {
pr_err("failed to load fpga image\n");
dev_err(dev, "failed to load FPGA image\n");
goto err_put_br;
}
ret = fpga_bridges_enable(&region->bridge_list);
if (ret) {
pr_err("failed to enable region bridges\n");
dev_err(dev, "failed to enable region bridges\n");
goto err_put_br;
}
fpga_mgr_put(mgr);
fpga_mgr_unlock(region->mgr);
fpga_region_put(region);
return 0;
err_put_br:
fpga_bridges_put(&region->bridge_list);
err_put_mgr:
fpga_mgr_put(mgr);
if (region->get_bridges)
fpga_bridges_put(&region->bridge_list);
err_unlock_mgr:
fpga_mgr_unlock(region->mgr);
err_put_region:
fpga_region_put(region);
return ret;
}
EXPORT_SYMBOL_GPL(fpga_region_program_fpga);
/**
* child_regions_with_firmware
* @overlay: device node of the overlay
*
* If the overlay adds child FPGA regions, they are not allowed to have
* firmware-name property.
*
* Return 0 for OK or -EINVAL if child FPGA region adds firmware-name.
*/
static int child_regions_with_firmware(struct device_node *overlay)
int fpga_region_register(struct device *dev, struct fpga_region *region)
{
struct device_node *child_region;
const char *child_firmware_name;
int ret = 0;
of_node_get(overlay);
child_region = of_find_matching_node(overlay, fpga_region_of_match);
while (child_region) {
if (!of_property_read_string(child_region, "firmware-name",
&child_firmware_name)) {
ret = -EINVAL;
break;
}
child_region = of_find_matching_node(child_region,
fpga_region_of_match);
}
of_node_put(child_region);
if (ret)
pr_err("firmware-name not allowed in child FPGA region: %pOF",
child_region);
return ret;
}
/**
* fpga_region_notify_pre_apply - pre-apply overlay notification
*
* @region: FPGA region that the overlay was applied to
* @nd: overlay notification data
*
* Called when an overlay targeted to a FPGA Region is about to be
* applied. Function will check the properties that will be added to the FPGA
* region. If the checks pass, it will program the FPGA.
*
* The checks are:
* The overlay must add either firmware-name or external-fpga-config property
* to the FPGA Region.
*
* firmware-name : program the FPGA
* external-fpga-config : FPGA is already programmed
* encrypted-fpga-config : FPGA bitstream is encrypted
*
* The overlay can add other FPGA regions, but child FPGA regions cannot have a
* firmware-name property since those regions don't exist yet.
*
* If the overlay breaks the rules, the notifier returns an error and the
* overlay is rejected before it goes into the main tree.
*
* Returns 0 for success or negative error code for failure.
*/
static int fpga_region_notify_pre_apply(struct fpga_region *region,
struct of_overlay_notify_data *nd)
{
const char *firmware_name = NULL;
struct fpga_image_info *info;
int ret;
info = devm_kzalloc(&region->dev, sizeof(*info), GFP_KERNEL);
if (!info)
return -ENOMEM;
region->info = info;
/* Reject overlay if child FPGA Regions have firmware-name property */
ret = child_regions_with_firmware(nd->overlay);
if (ret)
return ret;
/* Read FPGA region properties from the overlay */
if (of_property_read_bool(nd->overlay, "partial-fpga-config"))
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
if (of_property_read_bool(nd->overlay, "external-fpga-config"))
info->flags |= FPGA_MGR_EXTERNAL_CONFIG;
if (of_property_read_bool(nd->overlay, "encrypted-fpga-config"))
info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM;
of_property_read_string(nd->overlay, "firmware-name", &firmware_name);
of_property_read_u32(nd->overlay, "region-unfreeze-timeout-us",
&info->enable_timeout_us);
of_property_read_u32(nd->overlay, "region-freeze-timeout-us",
&info->disable_timeout_us);
of_property_read_u32(nd->overlay, "config-complete-timeout-us",
&info->config_complete_timeout_us);
/* If FPGA was externally programmed, don't specify firmware */
if ((info->flags & FPGA_MGR_EXTERNAL_CONFIG) && firmware_name) {
pr_err("error: specified firmware and external-fpga-config");
return -EINVAL;
}
/* FPGA is already configured externally. We're done. */
if (info->flags & FPGA_MGR_EXTERNAL_CONFIG)
return 0;
/* If we got this far, we should be programming the FPGA */
if (!firmware_name) {
pr_err("should specify firmware-name or external-fpga-config\n");
return -EINVAL;
}
return fpga_region_program_fpga(region, firmware_name, nd->overlay);
}
/**
* fpga_region_notify_post_remove - post-remove overlay notification
*
* @region: FPGA region that was targeted by the overlay that was removed
* @nd: overlay notification data
*
* Called after an overlay has been removed if the overlay's target was a
* FPGA region.
*/
static void fpga_region_notify_post_remove(struct fpga_region *region,
struct of_overlay_notify_data *nd)
{
fpga_bridges_disable(&region->bridge_list);
fpga_bridges_put(&region->bridge_list);
devm_kfree(&region->dev, region->info);
region->info = NULL;
}
/**
* of_fpga_region_notify - reconfig notifier for dynamic DT changes
* @nb: notifier block
* @action: notifier action
* @arg: reconfig data
*
* This notifier handles programming a FPGA when a "firmware-name" property is
* added to a fpga-region.
*
* Returns NOTIFY_OK or error if FPGA programming fails.
*/
static int of_fpga_region_notify(struct notifier_block *nb,
unsigned long action, void *arg)
{
struct of_overlay_notify_data *nd = arg;
struct fpga_region *region;
int ret;
switch (action) {
case OF_OVERLAY_PRE_APPLY:
pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__);
break;
case OF_OVERLAY_POST_APPLY:
pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__);
return NOTIFY_OK; /* not for us */
case OF_OVERLAY_PRE_REMOVE:
pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__);
return NOTIFY_OK; /* not for us */
case OF_OVERLAY_POST_REMOVE:
pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__);
break;
default: /* should not happen */
return NOTIFY_OK;
}
region = fpga_region_find(nd->target);
if (!region)
return NOTIFY_OK;
ret = 0;
switch (action) {
case OF_OVERLAY_PRE_APPLY:
ret = fpga_region_notify_pre_apply(region, nd);
break;
case OF_OVERLAY_POST_REMOVE:
fpga_region_notify_post_remove(region, nd);
break;
}
put_device(&region->dev);
if (ret)
return notifier_from_errno(ret);
return NOTIFY_OK;
}
static struct notifier_block fpga_region_of_nb = {
.notifier_call = of_fpga_region_notify,
};
static int fpga_region_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct fpga_region *region;
int id, ret = 0;
region = kzalloc(sizeof(*region), GFP_KERNEL);
if (!region)
return -ENOMEM;
id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL);
if (id < 0) {
ret = id;
goto err_kfree;
}
if (id < 0)
return id;
mutex_init(&region->mutex);
INIT_LIST_HEAD(&region->bridge_list);
device_initialize(&region->dev);
region->dev.groups = region->groups;
region->dev.class = fpga_region_class;
region->dev.parent = dev;
region->dev.of_node = np;
region->dev.of_node = dev->of_node;
region->dev.id = id;
dev_set_drvdata(dev, region);
@ -524,44 +188,27 @@ static int fpga_region_probe(struct platform_device *pdev)
if (ret)
goto err_remove;
of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
dev_info(dev, "FPGA Region probed\n");
return 0;
err_remove:
ida_simple_remove(&fpga_region_ida, id);
err_kfree:
kfree(region);
return ret;
}
EXPORT_SYMBOL_GPL(fpga_region_register);
static int fpga_region_remove(struct platform_device *pdev)
int fpga_region_unregister(struct fpga_region *region)
{
struct fpga_region *region = platform_get_drvdata(pdev);
device_unregister(&region->dev);
return 0;
}
static struct platform_driver fpga_region_driver = {
.probe = fpga_region_probe,
.remove = fpga_region_remove,
.driver = {
.name = "fpga-region",
.of_match_table = of_match_ptr(fpga_region_of_match),
},
};
EXPORT_SYMBOL_GPL(fpga_region_unregister);
static void fpga_region_dev_release(struct device *dev)
{
struct fpga_region *region = to_fpga_region(dev);
ida_simple_remove(&fpga_region_ida, region->dev.id);
kfree(region);
}
/**
@ -570,36 +217,17 @@ static void fpga_region_dev_release(struct device *dev)
*/
static int __init fpga_region_init(void)
{
int ret;
fpga_region_class = class_create(THIS_MODULE, "fpga_region");
if (IS_ERR(fpga_region_class))
return PTR_ERR(fpga_region_class);
fpga_region_class->dev_release = fpga_region_dev_release;
ret = of_overlay_notifier_register(&fpga_region_of_nb);
if (ret)
goto err_class;
ret = platform_driver_register(&fpga_region_driver);
if (ret)
goto err_plat;
return 0;
err_plat:
of_overlay_notifier_unregister(&fpga_region_of_nb);
err_class:
class_destroy(fpga_region_class);
ida_destroy(&fpga_region_ida);
return ret;
}
static void __exit fpga_region_exit(void)
{
platform_driver_unregister(&fpga_region_driver);
of_overlay_notifier_unregister(&fpga_region_of_nb);
class_destroy(fpga_region_class);
ida_destroy(&fpga_region_ida);
}
@ -608,5 +236,5 @@ subsys_initcall(fpga_region_init);
module_exit(fpga_region_exit);
MODULE_DESCRIPTION("FPGA Region");
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
MODULE_AUTHOR("Alan Tull <atull@kernel.org>");
MODULE_LICENSE("GPL v2");

View File

@ -0,0 +1,504 @@
/*
* FPGA Region - Device Tree support for FPGA programming under Linux
*
* Copyright (C) 2013-2016 Altera Corporation
* Copyright (C) 2017 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/fpga/fpga-bridge.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-region.h>
#include <linux/idr.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of_platform.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
static const struct of_device_id fpga_region_of_match[] = {
{ .compatible = "fpga-region", },
{},
};
MODULE_DEVICE_TABLE(of, fpga_region_of_match);
static int fpga_region_of_node_match(struct device *dev, const void *data)
{
return dev->of_node == data;
}
/**
* of_fpga_region_find - find FPGA region
* @np: device node of FPGA Region
*
* Caller will need to put_device(&region->dev) when done.
*
* Returns FPGA Region struct or NULL
*/
static struct fpga_region *of_fpga_region_find(struct device_node *np)
{
return fpga_region_class_find(NULL, np, fpga_region_of_node_match);
}
/**
* of_fpga_region_get_mgr - get reference for FPGA manager
* @np: device node of FPGA region
*
* Get FPGA Manager from "fpga-mgr" property or from ancestor region.
*
* Caller should call fpga_mgr_put() when done with manager.
*
* Return: fpga manager struct or IS_ERR() condition containing error code.
*/
static struct fpga_manager *of_fpga_region_get_mgr(struct device_node *np)
{
struct device_node *mgr_node;
struct fpga_manager *mgr;
of_node_get(np);
while (np) {
if (of_device_is_compatible(np, "fpga-region")) {
mgr_node = of_parse_phandle(np, "fpga-mgr", 0);
if (mgr_node) {
mgr = of_fpga_mgr_get(mgr_node);
of_node_put(mgr_node);
of_node_put(np);
return mgr;
}
}
np = of_get_next_parent(np);
}
of_node_put(np);
return ERR_PTR(-EINVAL);
}
/**
* of_fpga_region_get_bridges - create a list of bridges
* @region: FPGA region
*
* Create a list of bridges including the parent bridge and the bridges
* specified by "fpga-bridges" property. Note that the
* fpga_bridges_enable/disable/put functions are all fine with an empty list
* if that happens.
*
* Caller should call fpga_bridges_put(&region->bridge_list) when
* done with the bridges.
*
* Return 0 for success (even if there are no bridges specified)
* or -EBUSY if any of the bridges are in use.
*/
static int of_fpga_region_get_bridges(struct fpga_region *region)
{
struct device *dev = &region->dev;
struct device_node *region_np = dev->of_node;
struct fpga_image_info *info = region->info;
struct device_node *br, *np, *parent_br = NULL;
int i, ret;
/* If parent is a bridge, add to list */
ret = of_fpga_bridge_get_to_list(region_np->parent, info,
&region->bridge_list);
/* -EBUSY means parent is a bridge that is under use. Give up. */
if (ret == -EBUSY)
return ret;
/* Zero return code means parent was a bridge and was added to list. */
if (!ret)
parent_br = region_np->parent;
/* If overlay has a list of bridges, use it. */
br = of_parse_phandle(info->overlay, "fpga-bridges", 0);
if (br) {
of_node_put(br);
np = info->overlay;
} else {
np = region_np;
}
for (i = 0; ; i++) {
br = of_parse_phandle(np, "fpga-bridges", i);
if (!br)
break;
/* If parent bridge is in list, skip it. */
if (br == parent_br) {
of_node_put(br);
continue;
}
/* If node is a bridge, get it and add to list */
ret = of_fpga_bridge_get_to_list(br, info,
&region->bridge_list);
of_node_put(br);
/* If any of the bridges are in use, give up */
if (ret == -EBUSY) {
fpga_bridges_put(&region->bridge_list);
return -EBUSY;
}
}
return 0;
}
/**
* child_regions_with_firmware
* @overlay: device node of the overlay
*
* If the overlay adds child FPGA regions, they are not allowed to have
* firmware-name property.
*
* Return 0 for OK or -EINVAL if child FPGA region adds firmware-name.
*/
static int child_regions_with_firmware(struct device_node *overlay)
{
struct device_node *child_region;
const char *child_firmware_name;
int ret = 0;
of_node_get(overlay);
child_region = of_find_matching_node(overlay, fpga_region_of_match);
while (child_region) {
if (!of_property_read_string(child_region, "firmware-name",
&child_firmware_name)) {
ret = -EINVAL;
break;
}
child_region = of_find_matching_node(child_region,
fpga_region_of_match);
}
of_node_put(child_region);
if (ret)
pr_err("firmware-name not allowed in child FPGA region: %pOF",
child_region);
return ret;
}
/**
* of_fpga_region_parse_ov - parse and check overlay applied to region
*
* @region: FPGA region
* @overlay: overlay applied to the FPGA region
*
* Given an overlay applied to a FPGA region, parse the FPGA image specific
* info in the overlay and do some checking.
*
* Returns:
* NULL if overlay doesn't direct us to program the FPGA.
* fpga_image_info struct if there is an image to program.
* error code for invalid overlay.
*/
static struct fpga_image_info *of_fpga_region_parse_ov(
struct fpga_region *region,
struct device_node *overlay)
{
struct device *dev = &region->dev;
struct fpga_image_info *info;
const char *firmware_name;
int ret;
if (region->info) {
dev_err(dev, "Region already has overlay applied.\n");
return ERR_PTR(-EINVAL);
}
/*
* Reject overlay if child FPGA Regions added in the overlay have
* firmware-name property (would mean that an FPGA region that has
* not been added to the live tree yet is doing FPGA programming).
*/
ret = child_regions_with_firmware(overlay);
if (ret)
return ERR_PTR(ret);
info = fpga_image_info_alloc(dev);
if (!info)
return ERR_PTR(-ENOMEM);
info->overlay = overlay;
/* Read FPGA region properties from the overlay */
if (of_property_read_bool(overlay, "partial-fpga-config"))
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
if (of_property_read_bool(overlay, "external-fpga-config"))
info->flags |= FPGA_MGR_EXTERNAL_CONFIG;
if (of_property_read_bool(overlay, "encrypted-fpga-config"))
info->flags |= FPGA_MGR_ENCRYPTED_BITSTREAM;
if (!of_property_read_string(overlay, "firmware-name",
&firmware_name)) {
info->firmware_name = devm_kstrdup(dev, firmware_name,
GFP_KERNEL);
if (!info->firmware_name)
return ERR_PTR(-ENOMEM);
}
of_property_read_u32(overlay, "region-unfreeze-timeout-us",
&info->enable_timeout_us);
of_property_read_u32(overlay, "region-freeze-timeout-us",
&info->disable_timeout_us);
of_property_read_u32(overlay, "config-complete-timeout-us",
&info->config_complete_timeout_us);
/* If overlay is not programming the FPGA, don't need FPGA image info */
if (!info->firmware_name) {
ret = 0;
goto ret_no_info;
}
/*
* If overlay informs us FPGA was externally programmed, specifying
* firmware here would be ambiguous.
*/
if (info->flags & FPGA_MGR_EXTERNAL_CONFIG) {
dev_err(dev, "error: specified firmware and external-fpga-config");
ret = -EINVAL;
goto ret_no_info;
}
return info;
ret_no_info:
fpga_image_info_free(info);
return ERR_PTR(ret);
}
/**
* of_fpga_region_notify_pre_apply - pre-apply overlay notification
*
* @region: FPGA region that the overlay was applied to
* @nd: overlay notification data
*
* Called when an overlay targeted to a FPGA Region is about to be applied.
* Parses the overlay for properties that influence how the FPGA will be
* programmed and does some checking. If the checks pass, programs the FPGA.
* If the checks fail, overlay is rejected and does not get added to the
* live tree.
*
* Returns 0 for success or negative error code for failure.
*/
static int of_fpga_region_notify_pre_apply(struct fpga_region *region,
struct of_overlay_notify_data *nd)
{
struct device *dev = &region->dev;
struct fpga_image_info *info;
int ret;
info = of_fpga_region_parse_ov(region, nd->overlay);
if (IS_ERR(info))
return PTR_ERR(info);
/* If overlay doesn't program the FPGA, accept it anyway. */
if (!info)
return 0;
if (region->info) {
dev_err(dev, "Region already has overlay applied.\n");
return -EINVAL;
}
region->info = info;
ret = fpga_region_program_fpga(region);
if (ret) {
/* error; reject overlay */
fpga_image_info_free(info);
region->info = NULL;
}
return ret;
}
/**
* of_fpga_region_notify_post_remove - post-remove overlay notification
*
* @region: FPGA region that was targeted by the overlay that was removed
* @nd: overlay notification data
*
* Called after an overlay has been removed if the overlay's target was a
* FPGA region.
*/
static void of_fpga_region_notify_post_remove(struct fpga_region *region,
struct of_overlay_notify_data *nd)
{
fpga_bridges_disable(&region->bridge_list);
fpga_bridges_put(&region->bridge_list);
fpga_image_info_free(region->info);
region->info = NULL;
}
/**
* of_fpga_region_notify - reconfig notifier for dynamic DT changes
* @nb: notifier block
* @action: notifier action
* @arg: reconfig data
*
* This notifier handles programming a FPGA when a "firmware-name" property is
* added to a fpga-region.
*
* Returns NOTIFY_OK or error if FPGA programming fails.
*/
static int of_fpga_region_notify(struct notifier_block *nb,
unsigned long action, void *arg)
{
struct of_overlay_notify_data *nd = arg;
struct fpga_region *region;
int ret;
switch (action) {
case OF_OVERLAY_PRE_APPLY:
pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__);
break;
case OF_OVERLAY_POST_APPLY:
pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__);
return NOTIFY_OK; /* not for us */
case OF_OVERLAY_PRE_REMOVE:
pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__);
return NOTIFY_OK; /* not for us */
case OF_OVERLAY_POST_REMOVE:
pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__);
break;
default: /* should not happen */
return NOTIFY_OK;
}
region = of_fpga_region_find(nd->target);
if (!region)
return NOTIFY_OK;
ret = 0;
switch (action) {
case OF_OVERLAY_PRE_APPLY:
ret = of_fpga_region_notify_pre_apply(region, nd);
break;
case OF_OVERLAY_POST_REMOVE:
of_fpga_region_notify_post_remove(region, nd);
break;
}
put_device(&region->dev);
if (ret)
return notifier_from_errno(ret);
return NOTIFY_OK;
}
static struct notifier_block fpga_region_of_nb = {
.notifier_call = of_fpga_region_notify,
};
static int of_fpga_region_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct fpga_region *region;
struct fpga_manager *mgr;
int ret;
/* Find the FPGA mgr specified by region or parent region. */
mgr = of_fpga_region_get_mgr(np);
if (IS_ERR(mgr))
return -EPROBE_DEFER;
region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
if (!region) {
ret = -ENOMEM;
goto eprobe_mgr_put;
}
region->mgr = mgr;
/* Specify how to get bridges for this type of region. */
region->get_bridges = of_fpga_region_get_bridges;
ret = fpga_region_register(dev, region);
if (ret)
goto eprobe_mgr_put;
of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
dev_info(dev, "FPGA Region probed\n");
return 0;
eprobe_mgr_put:
fpga_mgr_put(mgr);
return ret;
}
static int of_fpga_region_remove(struct platform_device *pdev)
{
struct fpga_region *region = platform_get_drvdata(pdev);
fpga_region_unregister(region);
fpga_mgr_put(region->mgr);
return 0;
}
static struct platform_driver of_fpga_region_driver = {
.probe = of_fpga_region_probe,
.remove = of_fpga_region_remove,
.driver = {
.name = "of-fpga-region",
.of_match_table = of_match_ptr(fpga_region_of_match),
},
};
/**
* fpga_region_init - init function for fpga_region class
* Creates the fpga_region class and registers a reconfig notifier.
*/
static int __init of_fpga_region_init(void)
{
int ret;
ret = of_overlay_notifier_register(&fpga_region_of_nb);
if (ret)
return ret;
ret = platform_driver_register(&of_fpga_region_driver);
if (ret)
goto err_plat;
return 0;
err_plat:
of_overlay_notifier_unregister(&fpga_region_of_nb);
return ret;
}
static void __exit of_fpga_region_exit(void)
{
platform_driver_unregister(&of_fpga_region_driver);
of_overlay_notifier_unregister(&fpga_region_of_nb);
}
subsys_initcall(of_fpga_region_init);
module_exit(of_fpga_region_exit);
MODULE_DESCRIPTION("FPGA Region");
MODULE_AUTHOR("Alan Tull <atull@kernel.org>");
MODULE_LICENSE("GPL v2");


@ -519,8 +519,14 @@ static int socfpga_a10_fpga_probe(struct platform_device *pdev)
return -EBUSY;
}
return fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager",
ret = fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager",
&socfpga_a10_fpga_mgr_ops, priv);
if (ret) {
clk_disable_unprepare(priv->clk);
return ret;
}
return 0;
}
static int socfpga_a10_fpga_remove(struct platform_device *pdev)


@ -2,9 +2,7 @@
# FSI subsystem
#
menu "FSI support"
config FSI
menuconfig FSI
tristate "FSI support"
select CRC4
---help---
@ -34,5 +32,3 @@ config FSI_SCOM
This option enables an FSI based SCOM device driver.
endif
endmenu


@ -49,9 +49,6 @@ struct hv_context hv_context = {
*/
int hv_init(void)
{
if (!hv_is_hypercall_page_setup())
return -ENOTSUPP;
hv_context.cpu_context = alloc_percpu(struct hv_per_cpu_context);
if (!hv_context.cpu_context)
return -ENOMEM;


@ -37,7 +37,6 @@
#include <linux/sched/task_stack.h>
#include <asm/hyperv.h>
#include <asm/hypervisor.h>
#include <asm/mshyperv.h>
#include <linux/notifier.h>
#include <linux/ptrace.h>
@ -1053,7 +1052,7 @@ static int vmbus_bus_init(void)
* Initialize the per-cpu interrupt state and
* connect to the host.
*/
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/hyperv:online",
ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "hyperv/vmbus:online",
hv_synic_init, hv_synic_cleanup);
if (ret < 0)
goto err_alloc;
@ -1193,7 +1192,7 @@ static ssize_t out_mask_show(const struct vmbus_channel *channel, char *buf)
return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask);
}
VMBUS_CHAN_ATTR_RO(out_mask);
static VMBUS_CHAN_ATTR_RO(out_mask);
static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf)
{
@ -1201,7 +1200,7 @@ static ssize_t in_mask_show(const struct vmbus_channel *channel, char *buf)
return sprintf(buf, "%u\n", rbi->ring_buffer->interrupt_mask);
}
VMBUS_CHAN_ATTR_RO(in_mask);
static VMBUS_CHAN_ATTR_RO(in_mask);
static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf)
{
@ -1209,7 +1208,7 @@ static ssize_t read_avail_show(const struct vmbus_channel *channel, char *buf)
return sprintf(buf, "%u\n", hv_get_bytes_to_read(rbi));
}
VMBUS_CHAN_ATTR_RO(read_avail);
static VMBUS_CHAN_ATTR_RO(read_avail);
static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf)
{
@ -1217,13 +1216,13 @@ static ssize_t write_avail_show(const struct vmbus_channel *channel, char *buf)
return sprintf(buf, "%u\n", hv_get_bytes_to_write(rbi));
}
VMBUS_CHAN_ATTR_RO(write_avail);
static VMBUS_CHAN_ATTR_RO(write_avail);
static ssize_t show_target_cpu(const struct vmbus_channel *channel, char *buf)
{
return sprintf(buf, "%u\n", channel->target_cpu);
}
VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL);
static VMBUS_CHAN_ATTR(cpu, S_IRUGO, show_target_cpu, NULL);
static ssize_t channel_pending_show(const struct vmbus_channel *channel,
char *buf)
@ -1232,7 +1231,7 @@ static ssize_t channel_pending_show(const struct vmbus_channel *channel,
channel_pending(channel,
vmbus_connection.monitor_pages[1]));
}
VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL);
static VMBUS_CHAN_ATTR(pending, S_IRUGO, channel_pending_show, NULL);
static ssize_t channel_latency_show(const struct vmbus_channel *channel,
char *buf)
@ -1241,19 +1240,34 @@ static ssize_t channel_latency_show(const struct vmbus_channel *channel,
channel_latency(channel,
vmbus_connection.monitor_pages[1]));
}
VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL);
static VMBUS_CHAN_ATTR(latency, S_IRUGO, channel_latency_show, NULL);
static ssize_t channel_interrupts_show(const struct vmbus_channel *channel, char *buf)
{
return sprintf(buf, "%llu\n", channel->interrupts);
}
VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL);
static VMBUS_CHAN_ATTR(interrupts, S_IRUGO, channel_interrupts_show, NULL);
static ssize_t channel_events_show(const struct vmbus_channel *channel, char *buf)
{
return sprintf(buf, "%llu\n", channel->sig_events);
}
VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL);
static VMBUS_CHAN_ATTR(events, S_IRUGO, channel_events_show, NULL);
static ssize_t subchannel_monitor_id_show(const struct vmbus_channel *channel,
char *buf)
{
return sprintf(buf, "%u\n", channel->offermsg.monitorid);
}
static VMBUS_CHAN_ATTR(monitor_id, S_IRUGO, subchannel_monitor_id_show, NULL);
static ssize_t subchannel_id_show(const struct vmbus_channel *channel,
char *buf)
{
return sprintf(buf, "%u\n",
channel->offermsg.offer.sub_channel_index);
}
static VMBUS_CHAN_ATTR_RO(subchannel_id);
static struct attribute *vmbus_chan_attrs[] = {
&chan_attr_out_mask.attr,
@ -1265,6 +1279,8 @@ static struct attribute *vmbus_chan_attrs[] = {
&chan_attr_latency.attr,
&chan_attr_interrupts.attr,
&chan_attr_events.attr,
&chan_attr_monitor_id.attr,
&chan_attr_subchannel_id.attr,
NULL
};
@ -1717,7 +1733,7 @@ static int __init hv_acpi_init(void)
{
int ret, t;
if (x86_hyper_type != X86_HYPER_MS_HYPERV)
if (!hv_is_hyperv_initialized())
return -ENODEV;
init_completion(&probe_event);


@ -163,10 +163,8 @@ static int replicator_probe(struct amba_device *adev, const struct amba_id *id)
desc.dev = &adev->dev;
desc.groups = replicator_groups;
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
return 0;
return PTR_ERR_OR_ZERO(drvdata->csdev);
}
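/*
 * Editorial note, not part of the patch: this probe and the funnel,
 * TPIU and vexpress-syscfg probes further down all collapse the
 * "if (IS_ERR(p)) return PTR_ERR(p); return 0;" tail into
 * PTR_ERR_OR_ZERO() from <linux/err.h>.  A minimal sketch of the
 * equivalent helper, for illustration only (the function name below
 * is made up):
 */
static inline long sketch_ptr_err_or_zero(const void *ptr)
{
	return IS_ERR(ptr) ? PTR_ERR(ptr) : 0;
}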
#ifdef CONFIG_PM


@ -33,7 +33,6 @@
#include <linux/mm.h>
#include <linux/perf_event.h>
#include <asm/local.h>
#include "coresight-priv.h"


@ -214,10 +214,8 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id)
desc.dev = dev;
desc.groups = coresight_funnel_groups;
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
return 0;
return PTR_ERR_OR_ZERO(drvdata->csdev);
}
#ifdef CONFIG_PM


@ -46,8 +46,11 @@
#define TPIU_ITATBCTR0 0xef8
/** register definition **/
/* FFSR - 0x300 */
#define FFSR_FT_STOPPED BIT(1)
/* FFCR - 0x304 */
#define FFCR_FON_MAN BIT(6)
#define FFCR_STOP_FI BIT(12)
/**
* @base: memory mapped base address for this component.
@ -85,10 +88,14 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
{
CS_UNLOCK(drvdata->base);
/* Clear formatter control reg. */
writel_relaxed(0x0, drvdata->base + TPIU_FFCR);
/* Clear formatter and stop on flush */
writel_relaxed(FFCR_STOP_FI, drvdata->base + TPIU_FFCR);
/* Generate manual flush */
writel_relaxed(FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
/* Wait for flush to complete */
coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
/* Wait for formatter to stop */
coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
CS_LOCK(drvdata->base);
}
@ -160,10 +167,8 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
desc.pdata = pdata;
desc.dev = dev;
drvdata->csdev = coresight_register(&desc);
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
return 0;
return PTR_ERR_OR_ZERO(drvdata->csdev);
}
#ifdef CONFIG_PM


@ -843,32 +843,17 @@ static void coresight_fixup_orphan_conns(struct coresight_device *csdev)
}
static int coresight_name_match(struct device *dev, void *data)
{
char *to_match;
struct coresight_device *i_csdev;
to_match = data;
i_csdev = to_coresight_device(dev);
if (to_match && !strcmp(to_match, dev_name(&i_csdev->dev)))
return 1;
return 0;
}
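/*
 * Editorial note, not part of the patch: coresight_name_match() above is
 * dropped because the driver core already provides
 * bus_find_device_by_name(), which the rewritten loop below uses (the
 * nvmem core further down gets the same conversion).  A hedged sketch of
 * the equivalence, with made-up names; the real helper lives in the
 * driver core and may compare names slightly differently:
 */
static int sketch_name_match(struct device *dev, void *name)
{
	return !strcmp(dev_name(dev), name);
}

static struct device *sketch_find_by_name(struct bus_type *bus,
					   const char *name)
{
	/* roughly what bus_find_device_by_name(bus, NULL, name) does */
	return bus_find_device(bus, NULL, (void *)name, sketch_name_match);
}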
static void coresight_fixup_device_conns(struct coresight_device *csdev)
{
int i;
struct device *dev = NULL;
struct coresight_connection *conn;
for (i = 0; i < csdev->nr_outport; i++) {
conn = &csdev->conns[i];
dev = bus_find_device(&coresight_bustype, NULL,
(void *)conn->child_name,
coresight_name_match);
struct coresight_connection *conn = &csdev->conns[i];
struct device *dev = NULL;
if (conn->child_name)
dev = bus_find_device_by_name(&coresight_bustype, NULL,
conn->child_name);
if (dev) {
conn->child_dev = to_coresight_device(dev);
/* and put reference from 'bus_find_device()' */


@ -53,7 +53,7 @@ config AD525X_DPOT_SPI
config ATMEL_TCLIB
bool "Atmel AT32/AT91 Timer/Counter Library"
depends on (AVR32 || ARCH_AT91)
depends on ARCH_AT91
help
Select this if you want a library to allocate the Timer/Counter
blocks found on many Atmel processors. This facilitates using
@ -192,7 +192,7 @@ config ICS932S401
config ATMEL_SSC
tristate "Device driver for Atmel SSC peripheral"
depends on HAS_IOMEM && (AVR32 || ARCH_AT91 || COMPILE_TEST)
depends on HAS_IOMEM && (ARCH_AT91 || COMPILE_TEST)
---help---
This option enables device driver support for Atmel Synchronized
Serial Communication peripheral (SSC).


@ -3,7 +3,7 @@
* Copyright (c) 2009-2010 Analog Devices, Inc.
* Author: Michael Hennerich <hennerich@blackfin.uclinux.org>
*
* DEVID #Wipers #Positions Resistor Options (kOhm)
* DEVID #Wipers #Positions Resistor Options (kOhm)
* AD5258 1 64 1, 10, 50, 100
* AD5259 1 256 5, 10, 50, 100
* AD5251 2 64 1, 10, 50, 100
@ -84,12 +84,12 @@
struct dpot_data {
struct ad_dpot_bus_data bdata;
struct mutex update_lock;
unsigned rdac_mask;
unsigned max_pos;
unsigned int rdac_mask;
unsigned int max_pos;
unsigned long devid;
unsigned uid;
unsigned feat;
unsigned wipers;
unsigned int uid;
unsigned int feat;
unsigned int wipers;
u16 rdac_cache[MAX_RDACS];
DECLARE_BITMAP(otp_en_mask, MAX_RDACS);
};
@ -126,7 +126,7 @@ static inline int dpot_write_r8d16(struct dpot_data *dpot, u8 reg, u16 val)
static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg)
{
unsigned ctrl = 0;
unsigned int ctrl = 0;
int value;
if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD))) {
@ -175,7 +175,7 @@ static s32 dpot_read_spi(struct dpot_data *dpot, u8 reg)
static s32 dpot_read_i2c(struct dpot_data *dpot, u8 reg)
{
int value;
unsigned ctrl = 0;
unsigned int ctrl = 0;
switch (dpot->uid) {
case DPOT_UID(AD5246_ID):
@ -238,7 +238,7 @@ static s32 dpot_read(struct dpot_data *dpot, u8 reg)
static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value)
{
unsigned val = 0;
unsigned int val = 0;
if (!(reg & (DPOT_ADDR_EEPROM | DPOT_ADDR_CMD | DPOT_ADDR_OTP))) {
if (dpot->feat & F_RDACS_WONLY)
@ -328,7 +328,7 @@ static s32 dpot_write_spi(struct dpot_data *dpot, u8 reg, u16 value)
static s32 dpot_write_i2c(struct dpot_data *dpot, u8 reg, u16 value)
{
/* Only write the instruction byte for certain commands */
unsigned tmp = 0, ctrl = 0;
unsigned int tmp = 0, ctrl = 0;
switch (dpot->uid) {
case DPOT_UID(AD5246_ID):
@ -515,11 +515,11 @@ set_##_name(struct device *dev, \
#define DPOT_DEVICE_SHOW_SET(name, reg) \
DPOT_DEVICE_SHOW(name, reg) \
DPOT_DEVICE_SET(name, reg) \
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name);
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, set_##name)
#define DPOT_DEVICE_SHOW_ONLY(name, reg) \
DPOT_DEVICE_SHOW(name, reg) \
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL);
static DEVICE_ATTR(name, S_IWUSR | S_IRUGO, show_##name, NULL)
DPOT_DEVICE_SHOW_SET(rdac0, DPOT_ADDR_RDAC | DPOT_RDAC0);
DPOT_DEVICE_SHOW_SET(eeprom0, DPOT_ADDR_EEPROM | DPOT_RDAC0);
@ -616,7 +616,7 @@ set_##_name(struct device *dev, \
{ \
return sysfs_do_cmd(dev, attr, buf, count, _cmd); \
} \
static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name);
static DEVICE_ATTR(_name, S_IWUSR | S_IRUGO, NULL, set_##_name)
DPOT_DEVICE_DO_CMD(inc_all, DPOT_INC_ALL);
DPOT_DEVICE_DO_CMD(dec_all, DPOT_DEC_ALL);
@ -636,7 +636,7 @@ static const struct attribute_group ad525x_group_commands = {
};
static int ad_dpot_add_files(struct device *dev,
unsigned features, unsigned rdac)
unsigned int features, unsigned int rdac)
{
int err = sysfs_create_file(&dev->kobj,
dpot_attrib_wipers[rdac]);
@ -661,7 +661,7 @@ static int ad_dpot_add_files(struct device *dev,
}
static inline void ad_dpot_remove_files(struct device *dev,
unsigned features, unsigned rdac)
unsigned int features, unsigned int rdac)
{
sysfs_remove_file(&dev->kobj,
dpot_attrib_wipers[rdac]);


@ -195,12 +195,12 @@ enum dpot_devid {
struct dpot_data;
struct ad_dpot_bus_ops {
int (*read_d8) (void *client);
int (*read_r8d8) (void *client, u8 reg);
int (*read_r8d16) (void *client, u8 reg);
int (*write_d8) (void *client, u8 val);
int (*write_r8d8) (void *client, u8 reg, u8 val);
int (*write_r8d16) (void *client, u8 reg, u16 val);
int (*read_d8)(void *client);
int (*read_r8d8)(void *client, u8 reg);
int (*read_r8d16)(void *client, u8 reg);
int (*write_d8)(void *client, u8 val);
int (*write_r8d8)(void *client, u8 reg, u8 val);
int (*write_r8d16)(void *client, u8 reg, u16 val);
};
struct ad_dpot_bus_data {


@ -715,6 +715,7 @@ static ssize_t apds990x_rate_avail(struct device *dev,
{
int i;
int pos = 0;
for (i = 0; i < ARRAY_SIZE(arates_hz); i++)
pos += sprintf(buf + pos, "%d ", arates_hz[i]);
sprintf(buf + pos - 1, "\n");
@ -725,6 +726,7 @@ static ssize_t apds990x_rate_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", chip->arate);
}
@ -784,6 +786,7 @@ static ssize_t apds990x_prox_show(struct device *dev,
{
ssize_t ret;
struct apds990x_chip *chip = dev_get_drvdata(dev);
if (pm_runtime_suspended(dev) || !chip->prox_en)
return -EIO;
@ -807,6 +810,7 @@ static ssize_t apds990x_prox_enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", chip->prox_en);
}
@ -847,6 +851,7 @@ static ssize_t apds990x_prox_reporting_mode_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%s\n",
reporting_modes[!!chip->prox_continuous_mode]);
}
@ -884,6 +889,7 @@ static ssize_t apds990x_lux_thresh_above_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", chip->lux_thres_hi);
}
@ -891,6 +897,7 @@ static ssize_t apds990x_lux_thresh_below_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", chip->lux_thres_lo);
}
@ -926,6 +933,7 @@ static ssize_t apds990x_lux_thresh_above_store(struct device *dev,
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_hi, buf);
if (ret < 0)
return ret;
return len;
@ -937,6 +945,7 @@ static ssize_t apds990x_lux_thresh_below_store(struct device *dev,
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
int ret = apds990x_set_lux_thresh(chip, &chip->lux_thres_lo, buf);
if (ret < 0)
return ret;
return len;
@ -954,6 +963,7 @@ static ssize_t apds990x_prox_threshold_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", chip->prox_thres);
}
@ -1026,6 +1036,7 @@ static ssize_t apds990x_chip_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct apds990x_chip *chip = dev_get_drvdata(dev);
return sprintf(buf, "%s %d\n", chip->chipname, chip->revision);
}


@ -59,25 +59,42 @@ static ssize_t ds1682_show(struct device *dev, struct device_attribute *attr,
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
struct i2c_client *client = to_i2c_client(dev);
__le32 val = 0;
unsigned long long val, check;
__le32 val_le = 0;
int rc;
dev_dbg(dev, "ds1682_show() called on %s\n", attr->attr.name);
/* Read the register */
rc = i2c_smbus_read_i2c_block_data(client, sattr->index, sattr->nr,
(u8 *) & val);
(u8 *)&val_le);
if (rc < 0)
return -EIO;
/* Special case: the 32 bit regs are time values with 1/4s
* resolution, scale them up to milliseconds */
if (sattr->nr == 4)
return sprintf(buf, "%llu\n",
((unsigned long long)le32_to_cpu(val)) * 250);
val = le32_to_cpu(val_le);
/* Format the output string and return # of bytes */
return sprintf(buf, "%li\n", (long)le32_to_cpu(val));
if (sattr->index == DS1682_REG_ELAPSED) {
int retries = 5;
/* Detect and retry when a tick occurs mid-read */
do {
rc = i2c_smbus_read_i2c_block_data(client, sattr->index,
sattr->nr,
(u8 *)&val_le);
if (rc < 0 || retries <= 0)
return -EIO;
check = val;
val = le32_to_cpu(val_le);
retries--;
} while (val != check && val != (check + 1));
}
/* Format the output string and return # of bytes
* Special case: the 32 bit regs are time values with 1/4s
* resolution, scale them up to milliseconds
*/
return sprintf(buf, "%llu\n", (sattr->nr == 4) ? (val * 250) : val);
}
static ssize_t ds1682_store(struct device *dev, struct device_attribute *attr,


@ -276,6 +276,9 @@ static int at25_fw_to_chip(struct device *dev, struct spi_eeprom *chip)
return -ENODEV;
}
switch (val) {
case 9:
chip->flags |= EE_INSTR_BIT3_IS_ADDR;
/* fall through */
case 8:
chip->flags |= EE_ADDR1;
break;


@ -468,7 +468,7 @@ static struct class enclosure_class = {
.dev_groups = enclosure_class_groups,
};
static const char *const enclosure_status [] = {
static const char *const enclosure_status[] = {
[ENCLOSURE_STATUS_UNSUPPORTED] = "unsupported",
[ENCLOSURE_STATUS_OK] = "OK",
[ENCLOSURE_STATUS_CRITICAL] = "critical",
@ -480,7 +480,7 @@ static const char *const enclosure_status [] = {
[ENCLOSURE_STATUS_MAX] = NULL,
};
static const char *const enclosure_type [] = {
static const char *const enclosure_type[] = {
[ENCLOSURE_COMPONENT_DEVICE] = "device",
[ENCLOSURE_COMPONENT_ARRAY_DEVICE] = "array device",
};
@ -680,13 +680,7 @@ ATTRIBUTE_GROUPS(enclosure_component);
static int __init enclosure_init(void)
{
int err;
err = class_register(&enclosure_class);
if (err)
return err;
return 0;
return class_register(&enclosure_class);
}
static void __exit enclosure_exit(void)


@ -465,6 +465,7 @@ static int fsa9480_probe(struct i2c_client *client,
static int fsa9480_remove(struct i2c_client *client)
{
struct fsa9480_usbsw *usbsw = i2c_get_clientdata(client);
if (client->irq)
free_irq(client->irq, usbsw);


@ -153,11 +153,11 @@ static struct genwqe_dev *genwqe_dev_alloc(void)
cd->card_state = GENWQE_CARD_UNUSED;
spin_lock_init(&cd->print_lock);
cd->ddcb_software_timeout = genwqe_ddcb_software_timeout;
cd->kill_timeout = genwqe_kill_timeout;
cd->ddcb_software_timeout = GENWQE_DDCB_SOFTWARE_TIMEOUT;
cd->kill_timeout = GENWQE_KILL_TIMEOUT;
for (j = 0; j < GENWQE_MAX_VFS; j++)
cd->vf_jobtimeout_msec[j] = genwqe_vf_jobtimeout_msec;
cd->vf_jobtimeout_msec[j] = GENWQE_VF_JOBTIMEOUT_MSEC;
genwqe_devices[i] = cd;
return cd;
@ -324,11 +324,11 @@ static bool genwqe_setup_pf_jtimer(struct genwqe_dev *cd)
u32 T = genwqe_T_psec(cd);
u64 x;
if (genwqe_pf_jobtimeout_msec == 0)
if (GENWQE_PF_JOBTIMEOUT_MSEC == 0)
return false;
/* PF: large value needed, flash update 2sec per block */
x = ilog2(genwqe_pf_jobtimeout_msec *
x = ilog2(GENWQE_PF_JOBTIMEOUT_MSEC *
16000000000uL/(T * 15)) - 10;
genwqe_write_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT,
@ -904,7 +904,7 @@ static int genwqe_reload_bistream(struct genwqe_dev *cd)
* b) a critical GFIR occurred
*
* Informational GFIRs are checked and potentially printed in
* health_check_interval seconds.
* GENWQE_HEALTH_CHECK_INTERVAL seconds.
*/
static int genwqe_health_thread(void *data)
{
@ -918,7 +918,7 @@ static int genwqe_health_thread(void *data)
rc = wait_event_interruptible_timeout(cd->health_waitq,
(genwqe_health_check_cond(cd, &gfir) ||
(should_stop = kthread_should_stop())),
genwqe_health_check_interval * HZ);
GENWQE_HEALTH_CHECK_INTERVAL * HZ);
if (should_stop)
break;
@ -1028,7 +1028,7 @@ static int genwqe_health_check_start(struct genwqe_dev *cd)
{
int rc;
if (genwqe_health_check_interval <= 0)
if (GENWQE_HEALTH_CHECK_INTERVAL <= 0)
return 0; /* valid for disabling the service */
/* moved before request_irq() */


@ -47,13 +47,13 @@
#define GENWQE_CARD_NO_MAX (16 * GENWQE_MAX_FUNCS)
/* Compile parameters, some of them appear in debugfs for later adjustment */
#define genwqe_ddcb_max 32 /* DDCBs on the work-queue */
#define genwqe_polling_enabled 0 /* in case of irqs not working */
#define genwqe_ddcb_software_timeout 10 /* timeout per DDCB in seconds */
#define genwqe_kill_timeout 8 /* time until process gets killed */
#define genwqe_vf_jobtimeout_msec 250 /* 250 msec */
#define genwqe_pf_jobtimeout_msec 8000 /* 8 sec should be ok */
#define genwqe_health_check_interval 4 /* <= 0: disabled */
#define GENWQE_DDCB_MAX 32 /* DDCBs on the work-queue */
#define GENWQE_POLLING_ENABLED 0 /* in case of irqs not working */
#define GENWQE_DDCB_SOFTWARE_TIMEOUT 10 /* timeout per DDCB in seconds */
#define GENWQE_KILL_TIMEOUT 8 /* time until process gets killed */
#define GENWQE_VF_JOBTIMEOUT_MSEC 250 /* 250 msec */
#define GENWQE_PF_JOBTIMEOUT_MSEC 8000 /* 8 sec should be ok */
#define GENWQE_HEALTH_CHECK_INTERVAL 4 /* <= 0: disabled */
/* Sysfs attribute groups used when we create the genwqe device */
extern const struct attribute_group *genwqe_attribute_groups[];
@ -490,11 +490,9 @@ int genwqe_read_app_id(struct genwqe_dev *cd, char *app_name, int len);
/* Memory allocation/deallocation; dma address handling */
int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m,
void *uaddr, unsigned long size,
struct ddcb_requ *req);
void *uaddr, unsigned long size);
int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m,
struct ddcb_requ *req);
int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m);
static inline bool dma_mapping_used(struct dma_mapping *m)
{


@ -500,7 +500,7 @@ int __genwqe_wait_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req)
rc = wait_event_interruptible_timeout(queue->ddcb_waitqs[ddcb_no],
ddcb_requ_finished(cd, req),
genwqe_ddcb_software_timeout * HZ);
GENWQE_DDCB_SOFTWARE_TIMEOUT * HZ);
/*
* We need to distinguish 3 cases here:
@ -633,7 +633,7 @@ int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req)
__be32 old, new;
/* unsigned long flags; */
if (genwqe_ddcb_software_timeout <= 0) {
if (GENWQE_DDCB_SOFTWARE_TIMEOUT <= 0) {
dev_err(&pci_dev->dev,
"[%s] err: software timeout is not set!\n", __func__);
return -EFAULT;
@ -641,7 +641,7 @@ int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req)
pddcb = &queue->ddcb_vaddr[req->num];
for (t = 0; t < genwqe_ddcb_software_timeout * 10; t++) {
for (t = 0; t < GENWQE_DDCB_SOFTWARE_TIMEOUT * 10; t++) {
spin_lock_irqsave(&queue->ddcb_lock, flags);
@ -718,7 +718,7 @@ int __genwqe_purge_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req)
dev_err(&pci_dev->dev,
"[%s] err: DDCB#%d not purged and not completed after %d seconds QSTAT=%016llx!!\n",
__func__, req->num, genwqe_ddcb_software_timeout,
__func__, req->num, GENWQE_DDCB_SOFTWARE_TIMEOUT,
queue_status);
print_ddcb_info(cd, req->queue);
@ -778,7 +778,7 @@ int __genwqe_enqueue_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req,
/* FIXME circumvention to improve performance when no irq is
* there.
*/
if (genwqe_polling_enabled)
if (GENWQE_POLLING_ENABLED)
genwqe_check_ddcb_queue(cd, queue);
/*
@ -878,7 +878,7 @@ int __genwqe_enqueue_ddcb(struct genwqe_dev *cd, struct ddcb_requ *req,
pddcb->icrc_hsi_shi_32 = cpu_to_be32((u32)icrc << 16);
/* enable DDCB completion irq */
if (!genwqe_polling_enabled)
if (!GENWQE_POLLING_ENABLED)
pddcb->icrc_hsi_shi_32 |= DDCB_INTR_BE32;
dev_dbg(&pci_dev->dev, "INPUT DDCB#%d\n", req->num);
@ -1028,10 +1028,10 @@ static int setup_ddcb_queue(struct genwqe_dev *cd, struct ddcb_queue *queue)
unsigned int queue_size;
struct pci_dev *pci_dev = cd->pci_dev;
if (genwqe_ddcb_max < 2)
if (GENWQE_DDCB_MAX < 2)
return -EINVAL;
queue_size = roundup(genwqe_ddcb_max * sizeof(struct ddcb), PAGE_SIZE);
queue_size = roundup(GENWQE_DDCB_MAX * sizeof(struct ddcb), PAGE_SIZE);
queue->ddcbs_in_flight = 0; /* statistics */
queue->ddcbs_max_in_flight = 0;
@ -1040,7 +1040,7 @@ static int setup_ddcb_queue(struct genwqe_dev *cd, struct ddcb_queue *queue)
queue->wait_on_busy = 0;
queue->ddcb_seq = 0x100; /* start sequence number */
queue->ddcb_max = genwqe_ddcb_max; /* module parameter */
queue->ddcb_max = GENWQE_DDCB_MAX;
queue->ddcb_vaddr = __genwqe_alloc_consistent(cd, queue_size,
&queue->ddcb_daddr);
if (queue->ddcb_vaddr == NULL) {
@ -1194,7 +1194,7 @@ static int genwqe_card_thread(void *data)
genwqe_check_ddcb_queue(cd, &cd->queue);
if (genwqe_polling_enabled) {
if (GENWQE_POLLING_ENABLED) {
rc = wait_event_interruptible_timeout(
cd->queue_waitq,
genwqe_ddcbs_in_flight(cd) ||
@ -1340,7 +1340,7 @@ static int queue_wake_up_all(struct genwqe_dev *cd)
int genwqe_finish_queue(struct genwqe_dev *cd)
{
int i, rc = 0, in_flight;
int waitmax = genwqe_ddcb_software_timeout;
int waitmax = GENWQE_DDCB_SOFTWARE_TIMEOUT;
struct pci_dev *pci_dev = cd->pci_dev;
struct ddcb_queue *queue = &cd->queue;


@ -198,7 +198,7 @@ static int genwqe_jtimer_show(struct seq_file *s, void *unused)
jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT, 0);
seq_printf(s, " PF 0x%016llx %d msec\n", jtimer,
genwqe_pf_jobtimeout_msec);
GENWQE_PF_JOBTIMEOUT_MSEC);
for (vf_num = 0; vf_num < cd->num_vfs; vf_num++) {
jtimer = genwqe_read_vreg(cd, IO_SLC_VF_APPJOB_TIMEOUT,


@ -226,7 +226,7 @@ static void genwqe_remove_mappings(struct genwqe_file *cfile)
kfree(dma_map);
} else if (dma_map->type == GENWQE_MAPPING_SGL_TEMP) {
/* we use dma_map statically from the request */
genwqe_user_vunmap(cd, dma_map, NULL);
genwqe_user_vunmap(cd, dma_map);
}
}
}
@ -249,7 +249,7 @@ static void genwqe_remove_pinnings(struct genwqe_file *cfile)
* deleted.
*/
list_del_init(&dma_map->pin_list);
genwqe_user_vunmap(cd, dma_map, NULL);
genwqe_user_vunmap(cd, dma_map);
kfree(dma_map);
}
}
@ -790,7 +790,7 @@ static int genwqe_pin_mem(struct genwqe_file *cfile, struct genwqe_mem *m)
return -ENOMEM;
genwqe_mapping_init(dma_map, GENWQE_MAPPING_SGL_PINNED);
rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size, NULL);
rc = genwqe_user_vmap(cd, dma_map, (void *)map_addr, map_size);
if (rc != 0) {
dev_err(&pci_dev->dev,
"[%s] genwqe_user_vmap rc=%d\n", __func__, rc);
@ -820,7 +820,7 @@ static int genwqe_unpin_mem(struct genwqe_file *cfile, struct genwqe_mem *m)
return -ENOENT;
genwqe_del_pin(cfile, dma_map);
genwqe_user_vunmap(cd, dma_map, NULL);
genwqe_user_vunmap(cd, dma_map);
kfree(dma_map);
return 0;
}
@ -841,7 +841,7 @@ static int ddcb_cmd_cleanup(struct genwqe_file *cfile, struct ddcb_requ *req)
if (dma_mapping_used(dma_map)) {
__genwqe_del_mapping(cfile, dma_map);
genwqe_user_vunmap(cd, dma_map, req);
genwqe_user_vunmap(cd, dma_map);
}
if (req->sgls[i].sgl != NULL)
genwqe_free_sync_sgl(cd, &req->sgls[i]);
@ -947,7 +947,7 @@ static int ddcb_cmd_fixups(struct genwqe_file *cfile, struct ddcb_requ *req)
m->write = 0;
rc = genwqe_user_vmap(cd, m, (void *)u_addr,
u_size, req);
u_size);
if (rc != 0)
goto err_out;
@ -1011,7 +1011,6 @@ static int do_execute_ddcb(struct genwqe_file *cfile,
{
int rc;
struct genwqe_ddcb_cmd *cmd;
struct ddcb_requ *req;
struct genwqe_dev *cd = cfile->cd;
struct file *filp = cfile->filp;
@ -1019,8 +1018,6 @@ static int do_execute_ddcb(struct genwqe_file *cfile,
if (cmd == NULL)
return -ENOMEM;
req = container_of(cmd, struct ddcb_requ, cmd);
if (copy_from_user(cmd, (void __user *)arg, sizeof(*cmd))) {
ddcb_requ_free(cmd);
return -EFAULT;
@ -1345,7 +1342,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
rc = genwqe_kill_fasync(cd, SIGIO);
if (rc > 0) {
/* give kill_timeout seconds to close file descriptors ... */
for (i = 0; (i < genwqe_kill_timeout) &&
for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
genwqe_open_files(cd); i++) {
dev_info(&pci_dev->dev, " %d sec ...", i);
@ -1363,7 +1360,7 @@ static int genwqe_inform_and_stop_processes(struct genwqe_dev *cd)
rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */
if (rc) {
/* Give kill_timeout more seconds to end processes */
for (i = 0; (i < genwqe_kill_timeout) &&
for (i = 0; (i < GENWQE_KILL_TIMEOUT) &&
genwqe_open_files(cd); i++) {
dev_warn(&pci_dev->dev, " %d sec ...", i);


@ -524,22 +524,16 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl)
}
/**
* free_user_pages() - Give pinned pages back
* genwqe_free_user_pages() - Give pinned pages back
*
* Documentation of get_user_pages is in mm/memory.c:
* Documentation of get_user_pages is in mm/gup.c:
*
* If the page is written to, set_page_dirty (or set_page_dirty_lock,
* as appropriate) must be called after the page is finished with, and
* before put_page is called.
*
* FIXME Could be of use to others and might belong in the generic
* code, if others agree. E.g.
* ll_free_user_pages in drivers/staging/lustre/lustre/llite/rw26.c
* ceph_put_page_vector in net/ceph/pagevec.c
* maybe more?
*/
static int free_user_pages(struct page **page_list, unsigned int nr_pages,
int dirty)
static int genwqe_free_user_pages(struct page **page_list,
unsigned int nr_pages, int dirty)
{
unsigned int i;
@ -577,7 +571,7 @@ static int free_user_pages(struct page **page_list, unsigned int nr_pages,
* Return: 0 if success
*/
int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
unsigned long size, struct ddcb_requ *req)
unsigned long size)
{
int rc = -EINVAL;
unsigned long data, offs;
@ -617,7 +611,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
/* assumption: get_user_pages can be killed by signals. */
if (rc < m->nr_pages) {
free_user_pages(m->page_list, rc, m->write);
genwqe_free_user_pages(m->page_list, rc, m->write);
rc = -EFAULT;
goto fail_get_user_pages;
}
@ -629,7 +623,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
return 0;
fail_free_user_pages:
free_user_pages(m->page_list, m->nr_pages, m->write);
genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
fail_get_user_pages:
kfree(m->page_list);
@ -647,8 +641,7 @@ int genwqe_user_vmap(struct genwqe_dev *cd, struct dma_mapping *m, void *uaddr,
* @cd: pointer to genwqe device
* @m: mapping params
*/
int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m,
struct ddcb_requ *req)
int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m)
{
struct pci_dev *pci_dev = cd->pci_dev;
@ -662,7 +655,7 @@ int genwqe_user_vunmap(struct genwqe_dev *cd, struct dma_mapping *m,
genwqe_unmap_pages(cd, m->dma_list, m->nr_pages);
if (m->page_list) {
free_user_pages(m->page_list, m->nr_pages, m->write);
genwqe_free_user_pages(m->page_list, m->nr_pages, m->write);
kfree(m->page_list);
m->page_list = NULL;


@ -1,12 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Driver for the HP iLO management processor.
*
* Copyright (C) 2008 Hewlett-Packard Development Company, L.P.
* David Altobelli <david.altobelli@hpe.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/types.h>


@ -1,12 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/*
* linux/drivers/char/hpilo.h
*
* Copyright (C) 2008 Hewlett-Packard Development Company, L.P.
* David Altobelli <david.altobelli@hp.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef __HPILO_H
#define __HPILO_H


@ -33,7 +33,7 @@ static const unsigned short normal_i2c[] = { 0x69, I2C_CLIENT_END };
/* ICS932S401 registers */
#define ICS932S401_REG_CFG2 0x01
#define ICS932S401_CFG1_SPREAD 0x01
#define ICS932S401_CFG1_SPREAD 0x01
#define ICS932S401_REG_CFG7 0x06
#define ICS932S401_FS_MASK 0x07
#define ICS932S401_REG_VENDOR_REV 0x07
@ -58,7 +58,7 @@ static const unsigned short normal_i2c[] = { 0x69, I2C_CLIENT_END };
#define ICS932S401_REG_SRC_SPREAD1 0x11
#define ICS932S401_REG_SRC_SPREAD2 0x12
#define ICS932S401_REG_CPU_DIVISOR 0x13
#define ICS932S401_CPU_DIVISOR_SHIFT 4
#define ICS932S401_CPU_DIVISOR_SHIFT 4
#define ICS932S401_REG_PCISRC_DIVISOR 0x14
#define ICS932S401_SRC_DIVISOR_MASK 0x0F
#define ICS932S401_PCI_DIVISOR_SHIFT 4
@ -225,6 +225,7 @@ static ssize_t show_cpu_clock_sel(struct device *dev,
else {
/* Freq is neatly wrapped up for us */
int fid = data->regs[ICS932S401_REG_CFG7] & ICS932S401_FS_MASK;
freq = fs_speeds[fid];
if (data->regs[ICS932S401_REG_CTRL] & ICS932S401_CPU_ALT) {
switch (freq) {
@ -352,8 +353,7 @@ static DEVICE_ATTR(ref_clock, S_IRUGO, show_value, NULL);
static DEVICE_ATTR(cpu_spread, S_IRUGO, show_spread, NULL);
static DEVICE_ATTR(src_spread, S_IRUGO, show_spread, NULL);
static struct attribute *ics932s401_attr[] =
{
static struct attribute *ics932s401_attr[] = {
&dev_attr_spread_enabled.attr,
&dev_attr_cpu_clock_selection.attr,
&dev_attr_cpu_clock.attr,


@ -78,6 +78,7 @@ static int __isl29003_read_reg(struct i2c_client *client,
u32 reg, u8 mask, u8 shift)
{
struct isl29003_data *data = i2c_get_clientdata(client);
return (data->reg_cache[reg] & mask) >> shift;
}
@ -160,6 +161,7 @@ static int isl29003_get_power_state(struct i2c_client *client)
{
struct isl29003_data *data = i2c_get_clientdata(client);
u8 cmdreg = data->reg_cache[ISL29003_REG_COMMAND];
return ~cmdreg & ISL29003_ADC_PD;
}
@ -196,6 +198,7 @@ static ssize_t isl29003_show_range(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
return sprintf(buf, "%i\n", isl29003_get_range(client));
}
@ -231,6 +234,7 @@ static ssize_t isl29003_show_resolution(struct device *dev,
char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
return sprintf(buf, "%d\n", isl29003_get_resolution(client));
}
@ -264,6 +268,7 @@ static ssize_t isl29003_show_mode(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
return sprintf(buf, "%d\n", isl29003_get_mode(client));
}
@ -298,6 +303,7 @@ static ssize_t isl29003_show_power_state(struct device *dev,
char *buf)
{
struct i2c_client *client = to_i2c_client(dev);
return sprintf(buf, "%d\n", isl29003_get_power_state(client));
}
@ -361,6 +367,7 @@ static int isl29003_init_client(struct i2c_client *client)
* if one of the reads fails, we consider the init failed */
for (i = 0; i < ARRAY_SIZE(data->reg_cache); i++) {
int v = i2c_smbus_read_byte_data(client, i);
if (v < 0)
return -ENODEV;


@ -96,7 +96,7 @@ static struct crashpoint crashpoints[] = {
CRASHPOINT("DIRECT", NULL),
#ifdef CONFIG_KPROBES
CRASHPOINT("INT_HARDWARE_ENTRY", "do_IRQ"),
CRASHPOINT("INT_HW_IRQ_EN", "handle_IRQ_event"),
CRASHPOINT("INT_HW_IRQ_EN", "handle_irq_event"),
CRASHPOINT("INT_TASKLET_ENTRY", "tasklet_action"),
CRASHPOINT("FS_DEVRW", "ll_rw_block"),
CRASHPOINT("MEM_SWAPOUT", "shrink_inactive_list"),


@ -16,6 +16,8 @@ void lkdtm_OVERWRITE_ALLOCATION(void)
{
size_t len = 1020;
u32 *data = kmalloc(len, GFP_KERNEL);
if (!data)
return;
data[1024 / sizeof(u32)] = 0x12345678;
kfree(data);
@ -33,6 +35,8 @@ void lkdtm_WRITE_AFTER_FREE(void)
size_t offset = (len / sizeof(*base)) / 2;
base = kmalloc(len, GFP_KERNEL);
if (!base)
return;
pr_info("Allocated memory %p-%p\n", base, &base[offset * 2]);
pr_info("Attempting bad write to freed memory at %p\n",
&base[offset]);


@ -543,14 +543,20 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
mutex_lock(&bus->device_lock);
if (!mei_cl_is_connected(cl)) {
dev_dbg(bus->dev, "Already disconnected");
dev_dbg(bus->dev, "Already disconnected\n");
err = 0;
goto out;
}
if (bus->dev_state == MEI_DEV_POWER_DOWN) {
dev_dbg(bus->dev, "Device is powering down, don't bother with disconnection\n");
err = 0;
goto out;
}
err = mei_cl_disconnect(cl);
if (err < 0)
dev_err(bus->dev, "Could not disconnect from the ME client");
dev_err(bus->dev, "Could not disconnect from the ME client\n");
out:
/* Flush queues and remove any pending read */


@ -1260,7 +1260,9 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
if (rets == -ENODATA)
break;
if (rets && dev->dev_state != MEI_DEV_RESETTING) {
if (rets &&
(dev->dev_state != MEI_DEV_RESETTING &&
dev->dev_state != MEI_DEV_POWER_DOWN)) {
dev_err(dev->dev, "mei_irq_read_handler ret = %d.\n",
rets);
schedule_work(&dev->reset_work);


@ -1127,7 +1127,9 @@ irqreturn_t mei_txe_irq_thread_handler(int irq, void *dev_id)
if (test_and_clear_bit(TXE_INTR_OUT_DB_BIT, &hw->intr_cause)) {
/* Read from TXE */
rets = mei_irq_read_handler(dev, &cmpl_list, &slots);
if (rets && dev->dev_state != MEI_DEV_RESETTING) {
if (rets &&
(dev->dev_state != MEI_DEV_RESETTING &&
dev->dev_state != MEI_DEV_POWER_DOWN)) {
dev_err(dev->dev,
"mei_irq_read_handler ret = %d.\n", rets);


@ -310,6 +310,9 @@ void mei_stop(struct mei_device *dev)
{
dev_dbg(dev->dev, "stopping the device.\n");
mutex_lock(&dev->device_lock);
dev->dev_state = MEI_DEV_POWER_DOWN;
mutex_unlock(&dev->device_lock);
mei_cl_bus_remove_devices(dev);
mei_cancel_work(dev);
@ -319,7 +322,6 @@ void mei_stop(struct mei_device *dev)
mutex_lock(&dev->device_lock);
dev->dev_state = MEI_DEV_POWER_DOWN;
mei_reset(dev);
/* move device to disabled state unconditionally */
dev->dev_state = MEI_DEV_DISABLED;


@ -238,8 +238,11 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
mei_me_set_pm_domain(dev);
if (mei_pg_is_enabled(dev))
if (mei_pg_is_enabled(dev)) {
pm_runtime_put_noidle(&pdev->dev);
if (hw->d0i3_supported)
pm_runtime_allow(&pdev->dev);
}
dev_dbg(&pdev->dev, "initialization successful.\n");


@ -937,13 +937,10 @@ static long vop_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
dd.num_vq > MIC_MAX_VRINGS)
return -EINVAL;
dd_config = kzalloc(mic_desc_size(&dd), GFP_KERNEL);
if (!dd_config)
return -ENOMEM;
if (copy_from_user(dd_config, argp, mic_desc_size(&dd))) {
ret = -EFAULT;
goto free_ret;
}
dd_config = memdup_user(argp, mic_desc_size(&dd));
if (IS_ERR(dd_config))
return PTR_ERR(dd_config);
/* Ensure desc has not changed between the two reads */
if (memcmp(&dd, dd_config, sizeof(dd))) {
ret = -EINVAL;
@ -995,17 +992,12 @@ static long vop_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
ret = vop_vdev_inited(vdev);
if (ret)
goto __unlock_ret;
buf = kzalloc(vdev->dd->config_len, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
buf = memdup_user(argp, vdev->dd->config_len);
if (IS_ERR(buf)) {
ret = PTR_ERR(buf);
goto __unlock_ret;
}
if (copy_from_user(buf, argp, vdev->dd->config_len)) {
ret = -EFAULT;
goto done;
}
ret = vop_virtio_config_change(vdev, buf);
done:
kfree(buf);
__unlock_ret:
mutex_unlock(&vdev->vdev_mutex);
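/*
 * Editorial note, not part of the patch: both ioctl paths above switch to
 * memdup_user(), which folds the old kzalloc() + copy_from_user() pair
 * and its two failure paths into a single call returning either the
 * kernel copy or an ERR_PTR().  A hedged sketch of the pattern being
 * replaced (the function name is made up for illustration):
 */
static void *sketch_copy_ioctl_arg(void __user *argp, size_t len)
{
	void *buf = kzalloc(len, GFP_KERNEL);

	if (!buf)
		return ERR_PTR(-ENOMEM);
	if (copy_from_user(buf, argp, len)) {
		kfree(buf);
		return ERR_PTR(-EFAULT);
	}
	return buf;	/* memdup_user(argp, len) collapses this into one call */
}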


@ -270,10 +270,8 @@ static int vexpress_syscfg_probe(struct platform_device *pdev)
/* Must use dev.parent (MFD), as that's where DT phandle points at... */
bridge = vexpress_config_bridge_register(pdev->dev.parent,
&vexpress_syscfg_bridge_ops, syscfg);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
return 0;
return PTR_ERR_OR_ZERO(bridge);
}
static const struct platform_device_id vexpress_syscfg_id_table[] = {

View File

@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
#
# Multiplexer devices
#


@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for multiplexer devices.
#


@ -1,13 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Multiplexer driver for Analog Devices ADG792A/G Triple 4:1 mux
*
* Copyright (C) 2017 Axentia Technologies AB
*
* Author: Peter Rosin <peda@axentia.se>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/err.h>


@ -1,13 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Multiplexer subsystem
*
* Copyright (C) 2017 Axentia Technologies AB
*
* Author: Peter Rosin <peda@axentia.se>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define pr_fmt(fmt) "mux-core: " fmt


@ -1,13 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/*
* GPIO-controlled multiplexer driver
*
* Copyright (C) 2017 Axentia Technologies AB
*
* Author: Peter Rosin <peda@axentia.se>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/err.h>


@ -1,11 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/*
* MMIO register bitfield-controlled multiplexer driver
*
* Copyright (C) 2017 Pengutronix, Philipp Zabel <kernel@pengutronix.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/bitops.h>


@ -444,7 +444,6 @@ static int nvmem_setup_compat(struct nvmem_device *nvmem,
struct nvmem_device *nvmem_register(const struct nvmem_config *config)
{
struct nvmem_device *nvmem;
struct device_node *np;
int rval;
if (!config->dev)
@ -464,8 +463,8 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
nvmem->owner = config->owner;
if (!nvmem->owner && config->dev->driver)
nvmem->owner = config->dev->driver->owner;
nvmem->stride = config->stride;
nvmem->word_size = config->word_size;
nvmem->stride = config->stride ?: 1;
nvmem->word_size = config->word_size ?: 1;
nvmem->size = config->size;
nvmem->dev.type = &nvmem_provider_type;
nvmem->dev.bus = &nvmem_bus_type;
@ -473,13 +472,12 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
nvmem->priv = config->priv;
nvmem->reg_read = config->reg_read;
nvmem->reg_write = config->reg_write;
np = config->dev->of_node;
nvmem->dev.of_node = np;
nvmem->dev.of_node = config->dev->of_node;
dev_set_name(&nvmem->dev, "%s%d",
config->name ? : "nvmem",
config->name ? config->id : nvmem->id);
nvmem->read_only = of_property_read_bool(np, "read-only") |
nvmem->read_only = device_property_present(config->dev, "read-only") |
config->read_only;
if (config->root_only)
@ -600,16 +598,11 @@ static void __nvmem_device_put(struct nvmem_device *nvmem)
mutex_unlock(&nvmem_mutex);
}
static int nvmem_match(struct device *dev, void *data)
{
return !strcmp(dev_name(dev), data);
}
static struct nvmem_device *nvmem_find(const char *name)
{
struct device *d;
d = bus_find_device(&nvmem_bus_type, NULL, (void *)name, nvmem_match);
d = bus_find_device_by_name(&nvmem_bus_type, NULL, name);
if (!d)
return NULL;


@ -32,6 +32,14 @@
#define RK3288_STROBE BIT(1)
#define RK3288_CSB BIT(0)
#define RK3328_SECURE_SIZES 96
#define RK3328_INT_STATUS 0x0018
#define RK3328_DOUT 0x0020
#define RK3328_AUTO_CTRL 0x0024
#define RK3328_INT_FINISH BIT(0)
#define RK3328_AUTO_ENB BIT(0)
#define RK3328_AUTO_RD BIT(1)
#define RK3399_A_SHIFT 16
#define RK3399_A_MASK 0x3ff
#define RK3399_NBYTES 4
@ -92,6 +100,60 @@ static int rockchip_rk3288_efuse_read(void *context, unsigned int offset,
return 0;
}
static int rockchip_rk3328_efuse_read(void *context, unsigned int offset,
void *val, size_t bytes)
{
struct rockchip_efuse_chip *efuse = context;
unsigned int addr_start, addr_end, addr_offset, addr_len;
u32 out_value, status;
u8 *buf;
int ret, i = 0;
ret = clk_prepare_enable(efuse->clk);
if (ret < 0) {
dev_err(efuse->dev, "failed to prepare/enable efuse clk\n");
return ret;
}
/* 128 Byte efuse, 96 Byte for secure, 32 Byte for non-secure */
offset += RK3328_SECURE_SIZES;
addr_start = rounddown(offset, RK3399_NBYTES) / RK3399_NBYTES;
addr_end = roundup(offset + bytes, RK3399_NBYTES) / RK3399_NBYTES;
addr_offset = offset % RK3399_NBYTES;
addr_len = addr_end - addr_start;
buf = kzalloc(sizeof(*buf) * addr_len * RK3399_NBYTES, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto nomem;
}
while (addr_len--) {
writel(RK3328_AUTO_RD | RK3328_AUTO_ENB |
((addr_start++ & RK3399_A_MASK) << RK3399_A_SHIFT),
efuse->base + RK3328_AUTO_CTRL);
udelay(4);
status = readl(efuse->base + RK3328_INT_STATUS);
if (!(status & RK3328_INT_FINISH)) {
ret = -EIO;
goto err;
}
out_value = readl(efuse->base + RK3328_DOUT);
writel(RK3328_INT_FINISH, efuse->base + RK3328_INT_STATUS);
memcpy(&buf[i], &out_value, RK3399_NBYTES);
i += RK3399_NBYTES;
}
memcpy(val, buf + addr_offset, bytes);
err:
kfree(buf);
nomem:
clk_disable_unprepare(efuse->clk);
return ret;
}
static int rockchip_rk3399_efuse_read(void *context, unsigned int offset,
void *val, size_t bytes)
{
@ -180,6 +242,10 @@ static const struct of_device_id rockchip_efuse_match[] = {
.compatible = "rockchip,rk3368-efuse",
.data = (void *)&rockchip_rk3288_efuse_read,
},
{
.compatible = "rockchip,rk3328-efuse",
.data = (void *)&rockchip_rk3328_efuse_read,
},
{
.compatible = "rockchip,rk3399-efuse",
.data = (void *)&rockchip_rk3399_efuse_read,
@ -217,7 +283,9 @@ static int rockchip_efuse_probe(struct platform_device *pdev)
return PTR_ERR(efuse->clk);
efuse->dev = &pdev->dev;
econfig.size = resource_size(res);
if (of_property_read_u32(dev->of_node, "rockchip,efuse-size",
&econfig.size))
econfig.size = resource_size(res);
econfig.reg_read = match->data;
econfig.priv = efuse;
econfig.dev = efuse->dev;


@ -27,11 +27,11 @@ static int uniphier_reg_read(void *context,
unsigned int reg, void *_val, size_t bytes)
{
struct uniphier_efuse_priv *priv = context;
u32 *val = _val;
u8 *val = _val;
int offs;
for (offs = 0; offs < bytes; offs += sizeof(u32))
*val++ = readl(priv->base + reg + offs);
for (offs = 0; offs < bytes; offs += sizeof(u8))
*val++ = readb(priv->base + reg + offs);
return 0;
}
@ -53,8 +53,8 @@ static int uniphier_efuse_probe(struct platform_device *pdev)
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
econfig.stride = 4;
econfig.word_size = 4;
econfig.stride = 1;
econfig.word_size = 1;
econfig.read_only = true;
econfig.reg_read = uniphier_reg_read;
econfig.size = resource_size(res);
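
The uniphier change above switches the provider to byte-granular access (stride and word_size of 1, with a readb()-based reg_read callback). For context, a minimal provider-registration sketch using the same nvmem_config fields; all example_* names are hypothetical and this is not code from the series:

#include <linux/device.h>
#include <linux/io.h>
#include <linux/nvmem-provider.h>

struct example_efuse_priv {
        void __iomem *base;
};

/* Byte-granular reg_read callback, matching the change above. */
static int example_efuse_read(void *context, unsigned int offset,
                              void *val, size_t bytes)
{
        struct example_efuse_priv *priv = context;
        u8 *buf = val;
        size_t i;

        for (i = 0; i < bytes; i++)
                *buf++ = readb(priv->base + offset + i);

        return 0;
}

static struct nvmem_device *example_efuse_register(struct device *dev,
                                                   struct example_efuse_priv *priv,
                                                   size_t size)
{
        struct nvmem_config config = {
                .dev = dev,
                .name = "example-efuse",        /* hypothetical name */
                .stride = 1,                    /* byte granular, as above */
                .word_size = 1,
                .read_only = true,
                .size = size,
                .reg_read = example_efuse_read,
                .priv = priv,
        };

        return nvmem_register(&config);
}

With word_size and stride of 1, consumers can read cells at arbitrary byte offsets instead of being restricted to 32-bit aligned words.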

18
drivers/siox/Kconfig Normal file
@ -0,0 +1,18 @@
menuconfig SIOX
tristate "Eckelmann SIOX Support"
help
SIOX stands for Serial Input Output eXtension and is a synchronous
bus system invented by Eckelmann AG. It is used in their control and
remote monitoring systems for commercial and industrial refrigeration
to drive additional I/O units.
Unless you know better, it is probably safe to say "no" here.
if SIOX
config SIOX_BUS_GPIO
tristate "SIOX GPIO bus driver"
help
SIOX bus driver that controls the four bus lines using GPIOs.
endif

2
drivers/siox/Makefile Normal file
@ -0,0 +1,2 @@
obj-$(CONFIG_SIOX) += siox-core.o
obj-$(CONFIG_SIOX_BUS_GPIO) += siox-bus-gpio.o

@ -0,0 +1,172 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de>
*/
#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include "siox.h"
#define DRIVER_NAME "siox-gpio"
struct siox_gpio_ddata {
struct gpio_desc *din;
struct gpio_desc *dout;
struct gpio_desc *dclk;
struct gpio_desc *dld;
};
static unsigned int siox_clkhigh_ns = 1000;
static unsigned int siox_loadhigh_ns;
static unsigned int siox_bytegap_ns;
static int siox_gpio_pushpull(struct siox_master *smaster,
size_t setbuf_len, const u8 setbuf[],
size_t getbuf_len, u8 getbuf[])
{
struct siox_gpio_ddata *ddata = siox_master_get_devdata(smaster);
size_t i;
size_t cycles = max(setbuf_len, getbuf_len);
/* reset data and clock */
gpiod_set_value_cansleep(ddata->dout, 0);
gpiod_set_value_cansleep(ddata->dclk, 0);
gpiod_set_value_cansleep(ddata->dld, 1);
ndelay(siox_loadhigh_ns);
gpiod_set_value_cansleep(ddata->dld, 0);
for (i = 0; i < cycles; ++i) {
u8 set = 0, get = 0;
size_t j;
if (i >= cycles - setbuf_len)
set = setbuf[i - (cycles - setbuf_len)];
for (j = 0; j < 8; ++j) {
get <<= 1;
if (gpiod_get_value_cansleep(ddata->din))
get |= 1;
/* DOUT is logically inverted */
gpiod_set_value_cansleep(ddata->dout, !(set & 0x80));
set <<= 1;
gpiod_set_value_cansleep(ddata->dclk, 1);
ndelay(siox_clkhigh_ns);
gpiod_set_value_cansleep(ddata->dclk, 0);
}
if (i < getbuf_len)
getbuf[i] = get;
ndelay(siox_bytegap_ns);
}
gpiod_set_value_cansleep(ddata->dld, 1);
ndelay(siox_loadhigh_ns);
gpiod_set_value_cansleep(ddata->dld, 0);
/*
* Resetting dout isn't necessary protocol-wise, but it makes the
* signals nicer to look at because the dout level is then deterministic
* between cycles. Note that this only affects dout between the master and
* the first siox device; dout for the later devices depends on the output
* of the previous siox device.
*/
gpiod_set_value_cansleep(ddata->dout, 0);
return 0;
}
static int siox_gpio_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct siox_gpio_ddata *ddata;
int ret;
struct siox_master *smaster;
smaster = siox_master_alloc(&pdev->dev, sizeof(*ddata));
if (!smaster) {
dev_err(dev, "failed to allocate siox master\n");
return -ENOMEM;
}
platform_set_drvdata(pdev, smaster);
ddata = siox_master_get_devdata(smaster);
ddata->din = devm_gpiod_get(dev, "din", GPIOD_IN);
if (IS_ERR(ddata->din)) {
ret = PTR_ERR(ddata->din);
dev_err(dev, "Failed to get %s GPIO: %d\n", "din", ret);
goto err;
}
ddata->dout = devm_gpiod_get(dev, "dout", GPIOD_OUT_LOW);
if (IS_ERR(ddata->dout)) {
ret = PTR_ERR(ddata->dout);
dev_err(dev, "Failed to get %s GPIO: %d\n", "dout", ret);
goto err;
}
ddata->dclk = devm_gpiod_get(dev, "dclk", GPIOD_OUT_LOW);
if (IS_ERR(ddata->dclk)) {
ret = PTR_ERR(ddata->dclk);
dev_err(dev, "Failed to get %s GPIO: %d\n", "dclk", ret);
goto err;
}
ddata->dld = devm_gpiod_get(dev, "dld", GPIOD_OUT_LOW);
if (IS_ERR(ddata->dld)) {
ret = PTR_ERR(ddata->dld);
dev_err(dev, "Failed to get %s GPIO: %d\n", "dld", ret);
goto err;
}
smaster->pushpull = siox_gpio_pushpull;
/* XXX: determine automatically like spi does */
smaster->busno = 0;
ret = siox_master_register(smaster);
if (ret) {
dev_err(dev, "Failed to register siox master: %d\n", ret);
err:
siox_master_put(smaster);
}
return ret;
}
static int siox_gpio_remove(struct platform_device *pdev)
{
struct siox_master *master = platform_get_drvdata(pdev);
siox_master_unregister(master);
return 0;
}
static const struct of_device_id siox_gpio_dt_ids[] = {
{ .compatible = "eckelmann,siox-gpio", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, siox_gpio_dt_ids);
static struct platform_driver siox_gpio_driver = {
.probe = siox_gpio_probe,
.remove = siox_gpio_remove,
.driver = {
.name = DRIVER_NAME,
.of_match_table = siox_gpio_dt_ids,
},
};
module_platform_driver(siox_gpio_driver);
MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:" DRIVER_NAME);
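
The probe function above requests its four bus lines by the con_ids "din", "dout", "dclk" and "dld". On a non-DT platform this could be wired up with a GPIO lookup table; the chip label, line numbers and the platform-device registration below are hypothetical board-code assumptions, not part of this driver:

#include <linux/err.h>
#include <linux/gpio/machine.h>
#include <linux/init.h>
#include <linux/platform_device.h>

static struct gpiod_lookup_table example_siox_gpios = {
        .dev_id = "siox-gpio",          /* matches DRIVER_NAME above */
        .table = {
                GPIO_LOOKUP("gpiochip0", 10, "din",  GPIO_ACTIVE_HIGH),
                GPIO_LOOKUP("gpiochip0", 11, "dout", GPIO_ACTIVE_HIGH),
                GPIO_LOOKUP("gpiochip0", 12, "dclk", GPIO_ACTIVE_HIGH),
                GPIO_LOOKUP("gpiochip0", 13, "dld",  GPIO_ACTIVE_HIGH),
                { },
        },
};

static int __init example_siox_board_init(void)
{
        struct platform_device *pdev;

        gpiod_add_lookup_table(&example_siox_gpios);

        /* register a device that binds to the "siox-gpio" platform driver */
        pdev = platform_device_register_simple("siox-gpio", -1, NULL, 0);
        return PTR_ERR_OR_ZERO(pdev);
}
device_initcall(example_siox_board_init);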

934
drivers/siox/siox-core.c Normal file
@ -0,0 +1,934 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de>
*/
#include <linux/kernel.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
#include "siox.h"
/*
* The lowest bit in the SIOX status word signals if the in-device watchdog is
* ok. If the bit is set, the device is functional.
*
* On writing the watchdog timer is reset when this bit toggles.
*/
#define SIOX_STATUS_WDG 0x01
/*
* Bits 1 to 3 of the status word read as the bitwise negation of what was
* clocked in before. The value clocked in is changed in each cycle, which
* makes it possible to detect transmit/receive problems.
*/
#define SIOX_STATUS_COUNTER 0x0e
/*
* Each Siox-Device has a 4 bit type number that is neither 0 nor 15. This is
* available in the upper nibble of the read status.
*
* On write these bits are DC.
*/
#define SIOX_STATUS_TYPE 0xf0
#define CREATE_TRACE_POINTS
#include <trace/events/siox.h>
static bool siox_is_registered;
static void siox_master_lock(struct siox_master *smaster)
{
mutex_lock(&smaster->lock);
}
static void siox_master_unlock(struct siox_master *smaster)
{
mutex_unlock(&smaster->lock);
}
static inline u8 siox_status_clean(u8 status_read, u8 status_written)
{
/*
* bits 3:1 of status sample the respective bit in the status
* byte written in the previous cycle but inverted. So if you wrote the
* status word as 0xa before (counter = 0b101), it is expected to get
* back the counter bits as 0b010.
*
* So given the last status written, this function toggles those counter
* bits in the read value that were unset in the written status. As a
* result the counter bits in the return value are all zero iff the bits
* were read as expected, which simplifies error detection.
*/
return status_read ^ (~status_written & 0xe);
}
static bool siox_device_counter_error(struct siox_device *sdevice,
u8 status_clean)
{
return (status_clean & SIOX_STATUS_COUNTER) != 0;
}
static bool siox_device_type_error(struct siox_device *sdevice, u8 status_clean)
{
u8 statustype = (status_clean & SIOX_STATUS_TYPE) >> 4;
/*
* If the device knows which value the type bits should have, check
* against this value otherwise just rule out the invalid values 0b0000
* and 0b1111.
*/
if (sdevice->statustype) {
if (statustype != sdevice->statustype)
return true;
} else {
switch (statustype) {
case 0:
case 0xf:
return true;
}
}
return false;
}
static bool siox_device_wdg_error(struct siox_device *sdevice, u8 status_clean)
{
return (status_clean & SIOX_STATUS_WDG) == 0;
}
/*
* If there is a type or counter error the device is called "unsynced".
*/
bool siox_device_synced(struct siox_device *sdevice)
{
if (siox_device_type_error(sdevice, sdevice->status_read_clean))
return false;
return !siox_device_counter_error(sdevice, sdevice->status_read_clean);
}
EXPORT_SYMBOL_GPL(siox_device_synced);
/*
* A device is called "connected" if it is synced and the watchdog is not
* asserted.
*/
bool siox_device_connected(struct siox_device *sdevice)
{
if (!siox_device_synced(sdevice))
return false;
return !siox_device_wdg_error(sdevice, sdevice->status_read_clean);
}
EXPORT_SYMBOL_GPL(siox_device_connected);
static void siox_poll(struct siox_master *smaster)
{
struct siox_device *sdevice;
size_t i = smaster->setbuf_len;
unsigned int devno = 0;
int unsync_error = 0;
smaster->last_poll = jiffies;
/*
* The counter bits change in each second cycle, the watchdog bit
* toggles each time.
* The counter bits hold values from [0, 6]. 7 would be possible
* theoretically but the protocol designer considered that a bad idea
* for reasons unknown today. (Maybe that's because the status read back
* would then have only zeros in the counter bits, which might be confused
* with a stuck-at-0 error. But for the same reason (with s/0/1/) 0
* could be skipped.)
*/
if (++smaster->status > 0x0d)
smaster->status = 0;
memset(smaster->buf, 0, smaster->setbuf_len);
/* prepare data pushed out to devices in buf[0..setbuf_len) */
list_for_each_entry(sdevice, &smaster->devices, node) {
struct siox_driver *sdriver =
to_siox_driver(sdevice->dev.driver);
sdevice->status_written = smaster->status;
i -= sdevice->inbytes;
/*
* If the device or a previous one is unsynced, don't pet the
* watchdog. This is done to ensure that the device is kept in
* reset when something is wrong.
*/
if (!siox_device_synced(sdevice))
unsync_error = 1;
if (sdriver && !unsync_error)
sdriver->set_data(sdevice, sdevice->status_written,
&smaster->buf[i + 1]);
else
/*
* Don't trigger watchdog if there is no driver or a
* sync problem
*/
sdevice->status_written &= ~SIOX_STATUS_WDG;
smaster->buf[i] = sdevice->status_written;
trace_siox_set_data(smaster, sdevice, devno, i);
devno++;
}
smaster->pushpull(smaster, smaster->setbuf_len, smaster->buf,
smaster->getbuf_len,
smaster->buf + smaster->setbuf_len);
unsync_error = 0;
/* interpret data pulled in from devices in buf[setbuf_len..] */
devno = 0;
i = smaster->setbuf_len;
list_for_each_entry(sdevice, &smaster->devices, node) {
struct siox_driver *sdriver =
to_siox_driver(sdevice->dev.driver);
u8 status = smaster->buf[i + sdevice->outbytes - 1];
u8 status_clean;
u8 prev_status_clean = sdevice->status_read_clean;
bool synced = true;
bool connected = true;
if (!siox_device_synced(sdevice))
unsync_error = 1;
/*
* If the watchdog bit wasn't toggled in this cycle, report the
* watchdog as active to give a consistent view for drivers and
* sysfs consumers.
*/
if (!sdriver || unsync_error)
status &= ~SIOX_STATUS_WDG;
status_clean =
siox_status_clean(status,
sdevice->status_written_lastcycle);
/* Check counter bits */
if (siox_device_counter_error(sdevice, status_clean)) {
bool prev_counter_error;
synced = false;
/* only report a new error if the last cycle was ok */
prev_counter_error =
siox_device_counter_error(sdevice,
prev_status_clean);
if (!prev_counter_error) {
sdevice->status_errors++;
sysfs_notify_dirent(sdevice->status_errors_kn);
}
}
/* Check type bits */
if (siox_device_type_error(sdevice, status_clean))
synced = false;
/* If the device is unsynced report the watchdog as active */
if (!synced) {
status &= ~SIOX_STATUS_WDG;
status_clean &= ~SIOX_STATUS_WDG;
}
if (siox_device_wdg_error(sdevice, status_clean))
connected = false;
/* The watchdog state changed just now */
if ((status_clean ^ prev_status_clean) & SIOX_STATUS_WDG) {
sysfs_notify_dirent(sdevice->watchdog_kn);
if (siox_device_wdg_error(sdevice, status_clean)) {
struct kernfs_node *wd_errs =
sdevice->watchdog_errors_kn;
sdevice->watchdog_errors++;
sysfs_notify_dirent(wd_errs);
}
}
if (connected != sdevice->connected)
sysfs_notify_dirent(sdevice->connected_kn);
sdevice->status_read_clean = status_clean;
sdevice->status_written_lastcycle = sdevice->status_written;
sdevice->connected = connected;
trace_siox_get_data(smaster, sdevice, devno, status_clean, i);
/* only give data read to driver if the device is connected */
if (sdriver && connected)
sdriver->get_data(sdevice, &smaster->buf[i]);
devno++;
i += sdevice->outbytes;
}
}
static int siox_poll_thread(void *data)
{
struct siox_master *smaster = data;
signed long timeout = 0;
get_device(&smaster->dev);
for (;;) {
if (kthread_should_stop()) {
put_device(&smaster->dev);
return 0;
}
siox_master_lock(smaster);
if (smaster->active) {
unsigned long next_poll =
smaster->last_poll + smaster->poll_interval;
if (time_is_before_eq_jiffies(next_poll))
siox_poll(smaster);
timeout = smaster->poll_interval -
(jiffies - smaster->last_poll);
} else {
timeout = MAX_SCHEDULE_TIMEOUT;
}
/*
* Set the task to idle while holding the lock. This makes sure
* that we don't sleep too long when the bus is reenabled before
* schedule_timeout is reached.
*/
if (timeout > 0)
set_current_state(TASK_IDLE);
siox_master_unlock(smaster);
if (timeout > 0)
schedule_timeout(timeout);
/*
* I'm not clear if/why it is important to set the state to
* RUNNING again, but it fixes a "do not call blocking ops when
* !TASK_RUNNING;"-warning.
*/
set_current_state(TASK_RUNNING);
}
}
static int __siox_start(struct siox_master *smaster)
{
if (!(smaster->setbuf_len + smaster->getbuf_len))
return -ENODEV;
if (!smaster->buf)
return -ENOMEM;
if (smaster->active)
return 0;
smaster->active = 1;
wake_up_process(smaster->poll_thread);
return 1;
}
static int siox_start(struct siox_master *smaster)
{
int ret;
siox_master_lock(smaster);
ret = __siox_start(smaster);
siox_master_unlock(smaster);
return ret;
}
static int __siox_stop(struct siox_master *smaster)
{
if (smaster->active) {
struct siox_device *sdevice;
smaster->active = 0;
list_for_each_entry(sdevice, &smaster->devices, node) {
if (sdevice->connected)
sysfs_notify_dirent(sdevice->connected_kn);
sdevice->connected = false;
}
return 1;
}
return 0;
}
static int siox_stop(struct siox_master *smaster)
{
int ret;
siox_master_lock(smaster);
ret = __siox_stop(smaster);
siox_master_unlock(smaster);
return ret;
}
static ssize_t type_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
return sprintf(buf, "%s\n", sdev->type);
}
static DEVICE_ATTR_RO(type);
static ssize_t inbytes_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
return sprintf(buf, "%zu\n", sdev->inbytes);
}
static DEVICE_ATTR_RO(inbytes);
static ssize_t outbytes_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
return sprintf(buf, "%zu\n", sdev->outbytes);
}
static DEVICE_ATTR_RO(outbytes);
static ssize_t status_errors_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
unsigned int status_errors;
siox_master_lock(sdev->smaster);
status_errors = sdev->status_errors;
siox_master_unlock(sdev->smaster);
return sprintf(buf, "%u\n", status_errors);
}
static DEVICE_ATTR_RO(status_errors);
static ssize_t connected_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
bool connected;
siox_master_lock(sdev->smaster);
connected = sdev->connected;
siox_master_unlock(sdev->smaster);
return sprintf(buf, "%u\n", connected);
}
static DEVICE_ATTR_RO(connected);
static ssize_t watchdog_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
u8 status;
siox_master_lock(sdev->smaster);
status = sdev->status_read_clean;
siox_master_unlock(sdev->smaster);
return sprintf(buf, "%d\n", status & SIOX_STATUS_WDG);
}
static DEVICE_ATTR_RO(watchdog);
static ssize_t watchdog_errors_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_device *sdev = to_siox_device(dev);
unsigned int watchdog_errors;
siox_master_lock(sdev->smaster);
watchdog_errors = sdev->watchdog_errors;
siox_master_unlock(sdev->smaster);
return sprintf(buf, "%u\n", watchdog_errors);
}
static DEVICE_ATTR_RO(watchdog_errors);
static struct attribute *siox_device_attrs[] = {
&dev_attr_type.attr,
&dev_attr_inbytes.attr,
&dev_attr_outbytes.attr,
&dev_attr_status_errors.attr,
&dev_attr_connected.attr,
&dev_attr_watchdog.attr,
&dev_attr_watchdog_errors.attr,
NULL
};
ATTRIBUTE_GROUPS(siox_device);
static void siox_device_release(struct device *dev)
{
struct siox_device *sdevice = to_siox_device(dev);
kfree(sdevice);
}
static struct device_type siox_device_type = {
.groups = siox_device_groups,
.release = siox_device_release,
};
static int siox_match(struct device *dev, struct device_driver *drv)
{
if (dev->type != &siox_device_type)
return 0;
/* up to now there is only a single driver so keeping this simple */
return 1;
}
static struct bus_type siox_bus_type = {
.name = "siox",
.match = siox_match,
};
static int siox_driver_probe(struct device *dev)
{
struct siox_driver *sdriver = to_siox_driver(dev->driver);
struct siox_device *sdevice = to_siox_device(dev);
int ret;
ret = sdriver->probe(sdevice);
return ret;
}
static int siox_driver_remove(struct device *dev)
{
struct siox_driver *sdriver =
container_of(dev->driver, struct siox_driver, driver);
struct siox_device *sdevice = to_siox_device(dev);
int ret;
ret = sdriver->remove(sdevice);
return ret;
}
static void siox_driver_shutdown(struct device *dev)
{
struct siox_driver *sdriver =
container_of(dev->driver, struct siox_driver, driver);
struct siox_device *sdevice = to_siox_device(dev);
sdriver->shutdown(sdevice);
}
static ssize_t active_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_master *smaster = to_siox_master(dev);
return sprintf(buf, "%d\n", smaster->active);
}
static ssize_t active_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct siox_master *smaster = to_siox_master(dev);
int ret;
int active;
ret = kstrtoint(buf, 0, &active);
if (ret < 0)
return ret;
if (active)
ret = siox_start(smaster);
else
ret = siox_stop(smaster);
if (ret < 0)
return ret;
return count;
}
static DEVICE_ATTR_RW(active);
static struct siox_device *siox_device_add(struct siox_master *smaster,
const char *type, size_t inbytes,
size_t outbytes, u8 statustype);
static ssize_t device_add_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct siox_master *smaster = to_siox_master(dev);
int ret;
char type[20] = "";
size_t inbytes = 0, outbytes = 0;
u8 statustype = 0;
ret = sscanf(buf, "%20s %zu %zu %hhu", type, &inbytes,
&outbytes, &statustype);
if (ret != 3 && ret != 4)
return -EINVAL;
if (strcmp(type, "siox-12x8") || inbytes != 2 || outbytes != 4)
return -EINVAL;
siox_device_add(smaster, "siox-12x8", inbytes, outbytes, statustype);
return count;
}
static DEVICE_ATTR_WO(device_add);
static void siox_device_remove(struct siox_master *smaster);
static ssize_t device_remove_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct siox_master *smaster = to_siox_master(dev);
/* XXX? require to write <type> <inbytes> <outbytes> */
siox_device_remove(smaster);
return count;
}
static DEVICE_ATTR_WO(device_remove);
static ssize_t poll_interval_ns_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct siox_master *smaster = to_siox_master(dev);
return sprintf(buf, "%lld\n", jiffies_to_nsecs(smaster->poll_interval));
}
static ssize_t poll_interval_ns_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct siox_master *smaster = to_siox_master(dev);
int ret;
u64 val;
ret = kstrtou64(buf, 0, &val);
if (ret < 0)
return ret;
siox_master_lock(smaster);
smaster->poll_interval = nsecs_to_jiffies(val);
siox_master_unlock(smaster);
return count;
}
static DEVICE_ATTR_RW(poll_interval_ns);
static struct attribute *siox_master_attrs[] = {
&dev_attr_active.attr,
&dev_attr_device_add.attr,
&dev_attr_device_remove.attr,
&dev_attr_poll_interval_ns.attr,
NULL
};
ATTRIBUTE_GROUPS(siox_master);
static void siox_master_release(struct device *dev)
{
struct siox_master *smaster = to_siox_master(dev);
kfree(smaster);
}
static struct device_type siox_master_type = {
.groups = siox_master_groups,
.release = siox_master_release,
};
struct siox_master *siox_master_alloc(struct device *dev,
size_t size)
{
struct siox_master *smaster;
if (!dev)
return NULL;
smaster = kzalloc(sizeof(*smaster) + size, GFP_KERNEL);
if (!smaster)
return NULL;
device_initialize(&smaster->dev);
smaster->busno = -1;
smaster->dev.bus = &siox_bus_type;
smaster->dev.type = &siox_master_type;
smaster->dev.parent = dev;
smaster->poll_interval = DIV_ROUND_UP(HZ, 40);
dev_set_drvdata(&smaster->dev, &smaster[1]);
return smaster;
}
EXPORT_SYMBOL_GPL(siox_master_alloc);
int siox_master_register(struct siox_master *smaster)
{
int ret;
if (!siox_is_registered)
return -EPROBE_DEFER;
if (!smaster->pushpull)
return -EINVAL;
dev_set_name(&smaster->dev, "siox-%d", smaster->busno);
smaster->last_poll = jiffies;
smaster->poll_thread = kthread_create(siox_poll_thread, smaster,
"siox-%d", smaster->busno);
if (IS_ERR(smaster->poll_thread)) {
smaster->active = 0;
return PTR_ERR(smaster->poll_thread);
}
mutex_init(&smaster->lock);
INIT_LIST_HEAD(&smaster->devices);
ret = device_add(&smaster->dev);
if (ret)
kthread_stop(smaster->poll_thread);
return ret;
}
EXPORT_SYMBOL_GPL(siox_master_register);
void siox_master_unregister(struct siox_master *smaster)
{
/* remove device */
device_del(&smaster->dev);
siox_master_lock(smaster);
__siox_stop(smaster);
while (smaster->num_devices) {
struct siox_device *sdevice;
sdevice = container_of(smaster->devices.prev,
struct siox_device, node);
list_del(&sdevice->node);
smaster->num_devices--;
siox_master_unlock(smaster);
device_unregister(&sdevice->dev);
siox_master_lock(smaster);
}
siox_master_unlock(smaster);
put_device(&smaster->dev);
}
EXPORT_SYMBOL_GPL(siox_master_unregister);
static struct siox_device *siox_device_add(struct siox_master *smaster,
const char *type, size_t inbytes,
size_t outbytes, u8 statustype)
{
struct siox_device *sdevice;
int ret;
size_t buf_len;
sdevice = kzalloc(sizeof(*sdevice), GFP_KERNEL);
if (!sdevice)
return ERR_PTR(-ENOMEM);
sdevice->type = type;
sdevice->inbytes = inbytes;
sdevice->outbytes = outbytes;
sdevice->statustype = statustype;
sdevice->smaster = smaster;
sdevice->dev.parent = &smaster->dev;
sdevice->dev.bus = &siox_bus_type;
sdevice->dev.type = &siox_device_type;
siox_master_lock(smaster);
dev_set_name(&sdevice->dev, "siox-%d-%d",
smaster->busno, smaster->num_devices);
buf_len = smaster->setbuf_len + inbytes +
smaster->getbuf_len + outbytes;
if (smaster->buf_len < buf_len) {
u8 *buf = krealloc(smaster->buf, buf_len, GFP_KERNEL);
if (!buf) {
dev_err(&smaster->dev,
"failed to realloc buffer to %zu\n", buf_len);
ret = -ENOMEM;
goto err_buf_alloc;
}
smaster->buf_len = buf_len;
smaster->buf = buf;
}
ret = device_register(&sdevice->dev);
if (ret) {
dev_err(&smaster->dev, "failed to register device: %d\n", ret);
goto err_device_register;
}
smaster->num_devices++;
list_add_tail(&sdevice->node, &smaster->devices);
smaster->setbuf_len += sdevice->inbytes;
smaster->getbuf_len += sdevice->outbytes;
sdevice->status_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd,
"status_errors");
sdevice->watchdog_kn = sysfs_get_dirent(sdevice->dev.kobj.sd,
"watchdog");
sdevice->watchdog_errors_kn = sysfs_get_dirent(sdevice->dev.kobj.sd,
"watchdog_errors");
sdevice->connected_kn = sysfs_get_dirent(sdevice->dev.kobj.sd,
"connected");
siox_master_unlock(smaster);
return sdevice;
err_device_register:
/* don't care to make the buffer smaller again */
err_buf_alloc:
siox_master_unlock(smaster);
kfree(sdevice);
return ERR_PTR(ret);
}
static void siox_device_remove(struct siox_master *smaster)
{
struct siox_device *sdevice;
siox_master_lock(smaster);
if (!smaster->num_devices) {
siox_master_unlock(smaster);
return;
}
sdevice = container_of(smaster->devices.prev, struct siox_device, node);
list_del(&sdevice->node);
smaster->num_devices--;
smaster->setbuf_len -= sdevice->inbytes;
smaster->getbuf_len -= sdevice->outbytes;
if (!smaster->num_devices)
__siox_stop(smaster);
siox_master_unlock(smaster);
/*
* This must be done without holding the master lock because we're
* called from device_remove_store which also holds a sysfs mutex.
* device_unregister tries to acquire the same lock.
*/
device_unregister(&sdevice->dev);
}
int __siox_driver_register(struct siox_driver *sdriver, struct module *owner)
{
int ret;
if (unlikely(!siox_is_registered))
return -EPROBE_DEFER;
if (!sdriver->set_data && !sdriver->get_data) {
pr_err("Driver %s doesn't provide needed callbacks\n",
sdriver->driver.name);
return -EINVAL;
}
sdriver->driver.owner = owner;
sdriver->driver.bus = &siox_bus_type;
if (sdriver->probe)
sdriver->driver.probe = siox_driver_probe;
if (sdriver->remove)
sdriver->driver.remove = siox_driver_remove;
if (sdriver->shutdown)
sdriver->driver.shutdown = siox_driver_shutdown;
ret = driver_register(&sdriver->driver);
if (ret)
pr_err("Failed to register siox driver %s (%d)\n",
sdriver->driver.name, ret);
return ret;
}
EXPORT_SYMBOL_GPL(__siox_driver_register);
static int __init siox_init(void)
{
int ret;
ret = bus_register(&siox_bus_type);
if (ret) {
pr_err("Registration of SIOX bus type failed: %d\n", ret);
return ret;
}
siox_is_registered = true;
return 0;
}
subsys_initcall(siox_init);
static void __exit siox_exit(void)
{
bus_unregister(&siox_bus_type);
}
module_exit(siox_exit);
MODULE_AUTHOR("Uwe Kleine-Koenig <u.kleine-koenig@pengutronix.de>");
MODULE_DESCRIPTION("Eckelmann SIOX driver core");
MODULE_LICENSE("GPL v2");
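
The core above registers siox_bus_type and exports __siox_driver_register() for client drivers. A minimal client sketch follows; the set_data()/get_data() prototypes are inferred from how siox_poll() invokes them (the exact signatures live in <linux/siox.h>, which is not part of this excerpt), and all example_* names are made up:

#include <linux/device.h>
#include <linux/module.h>
#include <linux/siox.h>

static int example_set_data(struct siox_device *sdevice, u8 status, u8 buf[])
{
        /* fill the bytes clocked out to the device in the next cycle */
        buf[0] = 0x00;
        return 0;
}

static int example_get_data(struct siox_device *sdevice, const u8 buf[])
{
        /* consume the bytes clocked in from the device */
        dev_dbg(&sdevice->dev, "first input byte: 0x%02x\n", buf[0]);
        return 0;
}

static struct siox_driver example_siox_driver = {
        .set_data = example_set_data,
        .get_data = example_get_data,
        .driver = {
                .name = "example-siox-12x8",
        },
};

static int __init example_siox_driver_init(void)
{
        /* __siox_driver_register() is exported by siox-core.c above */
        return __siox_driver_register(&example_siox_driver, THIS_MODULE);
}
module_init(example_siox_driver_init);

static void __exit example_siox_driver_exit(void)
{
        driver_unregister(&example_siox_driver.driver);
}
module_exit(example_siox_driver_exit);

MODULE_LICENSE("GPL v2");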

49
drivers/siox/siox.h Normal file
@ -0,0 +1,49 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de>
*/
#include <linux/kernel.h>
#include <linux/kthread.h>
#include <linux/siox.h>
#define to_siox_master(_dev) container_of((_dev), struct siox_master, dev)
struct siox_master {
/* these fields should be initialized by the driver */
int busno;
int (*pushpull)(struct siox_master *smaster,
size_t setbuf_len, const u8 setbuf[],
size_t getbuf_len, u8 getbuf[]);
/* might be initialized by the driver, if 0 it is set to HZ / 40 */
unsigned long poll_interval; /* in jiffies */
/* framework private stuff */
struct mutex lock;
bool active;
struct module *owner;
struct device dev;
unsigned int num_devices;
struct list_head devices;
size_t setbuf_len, getbuf_len;
size_t buf_len;
u8 *buf;
u8 status;
unsigned long last_poll;
struct task_struct *poll_thread;
};
static inline void *siox_master_get_devdata(struct siox_master *smaster)
{
return dev_get_drvdata(&smaster->dev);
}
struct siox_master *siox_master_alloc(struct device *dev, size_t size);
static inline void siox_master_put(struct siox_master *smaster)
{
put_device(&smaster->dev);
}
int siox_master_register(struct siox_master *smaster);
void siox_master_unregister(struct siox_master *smaster);

24
drivers/slimbus/Kconfig Normal file
@ -0,0 +1,24 @@
# SPDX-License-Identifier: GPL-2.0
#
# SLIMbus driver configuration
#
menuconfig SLIMBUS
tristate "SLIMbus support"
help
SLIMbus is a standard interface between a System-on-Chip and audio codecs
and other peripheral components in typical embedded systems.
If unsure, choose N.
if SLIMBUS
# SLIMbus controllers
config SLIM_QCOM_CTRL
tristate "Qualcomm SLIMbus Manager Component"
depends on SLIMBUS
depends on HAS_IOMEM
help
Select this driver if Qualcomm's SLIMbus Manager Component is
programmed using the Linux kernel.
endif

10
drivers/slimbus/Makefile Normal file
@ -0,0 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for kernel SLIMbus framework.
#
obj-$(CONFIG_SLIMBUS) += slimbus.o
slimbus-y := core.o messaging.o sched.o
#Controllers
obj-$(CONFIG_SLIM_QCOM_CTRL) += slim-qcom-ctrl.o
slim-qcom-ctrl-y := qcom-ctrl.o

480
drivers/slimbus/core.c Normal file
@ -0,0 +1,480 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2011-2017, The Linux Foundation
*/
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/idr.h>
#include <linux/of.h>
#include <linux/pm_runtime.h>
#include <linux/slimbus.h>
#include "slimbus.h"
static DEFINE_IDA(ctrl_ida);
static const struct slim_device_id *slim_match(const struct slim_device_id *id,
const struct slim_device *sbdev)
{
while (id->manf_id != 0 || id->prod_code != 0) {
if (id->manf_id == sbdev->e_addr.manf_id &&
id->prod_code == sbdev->e_addr.prod_code)
return id;
id++;
}
return NULL;
}
static int slim_device_match(struct device *dev, struct device_driver *drv)
{
struct slim_device *sbdev = to_slim_device(dev);
struct slim_driver *sbdrv = to_slim_driver(drv);
return !!slim_match(sbdrv->id_table, sbdev);
}
static int slim_device_probe(struct device *dev)
{
struct slim_device *sbdev = to_slim_device(dev);
struct slim_driver *sbdrv = to_slim_driver(dev->driver);
return sbdrv->probe(sbdev);
}
static int slim_device_remove(struct device *dev)
{
struct slim_device *sbdev = to_slim_device(dev);
struct slim_driver *sbdrv;
if (dev->driver) {
sbdrv = to_slim_driver(dev->driver);
if (sbdrv->remove)
sbdrv->remove(sbdev);
}
return 0;
}
struct bus_type slimbus_bus = {
.name = "slimbus",
.match = slim_device_match,
.probe = slim_device_probe,
.remove = slim_device_remove,
};
EXPORT_SYMBOL_GPL(slimbus_bus);
/*
* __slim_driver_register() - Client driver registration with SLIMbus
*
* @drv:Client driver to be associated with client-device.
* @owner: owning module/driver
*
* This API will register the client driver with the SLIMbus
* It is called from the driver's module-init function.
*/
int __slim_driver_register(struct slim_driver *drv, struct module *owner)
{
/* ID table and probe are mandatory */
if (!drv->id_table || !drv->probe)
return -EINVAL;
drv->driver.bus = &slimbus_bus;
drv->driver.owner = owner;
return driver_register(&drv->driver);
}
EXPORT_SYMBOL_GPL(__slim_driver_register);
/*
* slim_driver_unregister() - Undo effect of slim_driver_register
*
* @drv: Client driver to be unregistered
*/
void slim_driver_unregister(struct slim_driver *drv)
{
driver_unregister(&drv->driver);
}
EXPORT_SYMBOL_GPL(slim_driver_unregister);
static void slim_dev_release(struct device *dev)
{
struct slim_device *sbdev = to_slim_device(dev);
kfree(sbdev);
}
static int slim_add_device(struct slim_controller *ctrl,
struct slim_device *sbdev,
struct device_node *node)
{
sbdev->dev.bus = &slimbus_bus;
sbdev->dev.parent = ctrl->dev;
sbdev->dev.release = slim_dev_release;
sbdev->dev.driver = NULL;
sbdev->ctrl = ctrl;
if (node)
sbdev->dev.of_node = of_node_get(node);
dev_set_name(&sbdev->dev, "%x:%x:%x:%x",
sbdev->e_addr.manf_id,
sbdev->e_addr.prod_code,
sbdev->e_addr.dev_index,
sbdev->e_addr.instance);
return device_register(&sbdev->dev);
}
static struct slim_device *slim_alloc_device(struct slim_controller *ctrl,
struct slim_eaddr *eaddr,
struct device_node *node)
{
struct slim_device *sbdev;
int ret;
sbdev = kzalloc(sizeof(*sbdev), GFP_KERNEL);
if (!sbdev)
return NULL;
sbdev->e_addr = *eaddr;
ret = slim_add_device(ctrl, sbdev, node);
if (ret) {
kfree(sbdev);
return NULL;
}
return sbdev;
}
static void of_register_slim_devices(struct slim_controller *ctrl)
{
struct device *dev = ctrl->dev;
struct device_node *node;
if (!ctrl->dev->of_node)
return;
for_each_child_of_node(ctrl->dev->of_node, node) {
struct slim_device *sbdev;
struct slim_eaddr e_addr;
const char *compat = NULL;
int reg[2], ret;
int manf_id, prod_code;
compat = of_get_property(node, "compatible", NULL);
if (!compat)
continue;
ret = sscanf(compat, "slim%x,%x", &manf_id, &prod_code);
if (ret != 2) {
dev_err(dev, "Manf ID & Product code not found %s\n",
compat);
continue;
}
ret = of_property_read_u32_array(node, "reg", reg, 2);
if (ret) {
dev_err(dev, "Device and Instance id not found:%d\n",
ret);
continue;
}
e_addr.dev_index = reg[0];
e_addr.instance = reg[1];
e_addr.manf_id = manf_id;
e_addr.prod_code = prod_code;
sbdev = slim_alloc_device(ctrl, &e_addr, node);
if (!sbdev)
continue;
}
}
/*
* slim_register_controller() - Controller bring-up and registration.
*
* @ctrl: Controller to be registered.
*
* A controller is registered with the framework using this API.
* If devices on a controller were registered before the controller,
* this will make sure that they get probed once the controller is up.
*/
int slim_register_controller(struct slim_controller *ctrl)
{
int id;
id = ida_simple_get(&ctrl_ida, 0, 0, GFP_KERNEL);
if (id < 0)
return id;
ctrl->id = id;
if (!ctrl->min_cg)
ctrl->min_cg = SLIM_MIN_CLK_GEAR;
if (!ctrl->max_cg)
ctrl->max_cg = SLIM_MAX_CLK_GEAR;
ida_init(&ctrl->laddr_ida);
idr_init(&ctrl->tid_idr);
mutex_init(&ctrl->lock);
mutex_init(&ctrl->sched.m_reconf);
init_completion(&ctrl->sched.pause_comp);
dev_dbg(ctrl->dev, "Bus [%s] registered:dev:%p\n",
ctrl->name, ctrl->dev);
of_register_slim_devices(ctrl);
return 0;
}
EXPORT_SYMBOL_GPL(slim_register_controller);
/* slim_remove_device: Remove the effect of slim_add_device() */
static void slim_remove_device(struct slim_device *sbdev)
{
device_unregister(&sbdev->dev);
}
static int slim_ctrl_remove_device(struct device *dev, void *null)
{
slim_remove_device(to_slim_device(dev));
return 0;
}
/**
* slim_unregister_controller() - Controller tear-down.
*
* @ctrl: Controller to tear-down.
*/
int slim_unregister_controller(struct slim_controller *ctrl)
{
/* Remove all clients */
device_for_each_child(ctrl->dev, NULL, slim_ctrl_remove_device);
/* Enter Clock Pause */
slim_ctrl_clk_pause(ctrl, false, 0);
ida_simple_remove(&ctrl_ida, ctrl->id);
return 0;
}
EXPORT_SYMBOL_GPL(slim_unregister_controller);
static void slim_device_update_status(struct slim_device *sbdev,
enum slim_device_status status)
{
struct slim_driver *sbdrv;
if (sbdev->status == status)
return;
sbdev->status = status;
if (!sbdev->dev.driver)
return;
sbdrv = to_slim_driver(sbdev->dev.driver);
if (sbdrv->device_status)
sbdrv->device_status(sbdev, sbdev->status);
}
/**
* slim_report_absent() - Controller calls this function when a device
* reports absent, OR when the device cannot be communicated with
*
* @sbdev: Device that cannot be reached, or sent report absent
*/
void slim_report_absent(struct slim_device *sbdev)
{
struct slim_controller *ctrl = sbdev->ctrl;
if (!ctrl)
return;
/* invalidate logical addresses */
mutex_lock(&ctrl->lock);
sbdev->is_laddr_valid = false;
mutex_unlock(&ctrl->lock);
ida_simple_remove(&ctrl->laddr_ida, sbdev->laddr);
slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_DOWN);
}
EXPORT_SYMBOL_GPL(slim_report_absent);
static bool slim_eaddr_equal(struct slim_eaddr *a, struct slim_eaddr *b)
{
return (a->manf_id == b->manf_id &&
a->prod_code == b->prod_code &&
a->dev_index == b->dev_index &&
a->instance == b->instance);
}
static int slim_match_dev(struct device *dev, void *data)
{
struct slim_eaddr *e_addr = data;
struct slim_device *sbdev = to_slim_device(dev);
return slim_eaddr_equal(&sbdev->e_addr, e_addr);
}
static struct slim_device *find_slim_device(struct slim_controller *ctrl,
struct slim_eaddr *eaddr)
{
struct slim_device *sbdev;
struct device *dev;
dev = device_find_child(ctrl->dev, eaddr, slim_match_dev);
if (dev) {
sbdev = to_slim_device(dev);
return sbdev;
}
return NULL;
}
/**
* slim_get_device() - get handle to a device.
*
* @ctrl: Controller on which this device will be added/queried
* @e_addr: Enumeration address of the device to be queried
*
* Return: pointer to a device if it has already reported. Creates a new
* device and returns pointer to it if the device has not yet enumerated.
*/
struct slim_device *slim_get_device(struct slim_controller *ctrl,
struct slim_eaddr *e_addr)
{
struct slim_device *sbdev;
sbdev = find_slim_device(ctrl, e_addr);
if (!sbdev) {
sbdev = slim_alloc_device(ctrl, e_addr, NULL);
if (!sbdev)
return ERR_PTR(-ENOMEM);
}
return sbdev;
}
EXPORT_SYMBOL_GPL(slim_get_device);
static int slim_device_alloc_laddr(struct slim_device *sbdev,
bool report_present)
{
struct slim_controller *ctrl = sbdev->ctrl;
u8 laddr;
int ret;
mutex_lock(&ctrl->lock);
if (ctrl->get_laddr) {
ret = ctrl->get_laddr(ctrl, &sbdev->e_addr, &laddr);
if (ret < 0)
goto err;
} else if (report_present) {
ret = ida_simple_get(&ctrl->laddr_ida,
0, SLIM_LA_MANAGER - 1, GFP_KERNEL);
if (ret < 0)
goto err;
laddr = ret;
} else {
ret = -EINVAL;
goto err;
}
if (ctrl->set_laddr) {
ret = ctrl->set_laddr(ctrl, &sbdev->e_addr, laddr);
if (ret) {
ret = -EINVAL;
goto err;
}
}
sbdev->laddr = laddr;
sbdev->is_laddr_valid = true;
slim_device_update_status(sbdev, SLIM_DEVICE_STATUS_UP);
dev_dbg(ctrl->dev, "setting slimbus l-addr:%x, ea:%x,%x,%x,%x\n",
laddr, sbdev->e_addr.manf_id, sbdev->e_addr.prod_code,
sbdev->e_addr.dev_index, sbdev->e_addr.instance);
err:
mutex_unlock(&ctrl->lock);
return ret;
}
/**
* slim_device_report_present() - Report enumerated device.
*
* @ctrl: Controller with which device is enumerated.
* @e_addr: Enumeration address of the device.
* @laddr: Return logical address (if valid flag is false)
*
* Called by controller in response to REPORT_PRESENT. Framework will assign
* a logical address to this enumeration address.
* Function returns -EXFULL to indicate that all logical addresses are already
* taken.
*/
int slim_device_report_present(struct slim_controller *ctrl,
struct slim_eaddr *e_addr, u8 *laddr)
{
struct slim_device *sbdev;
int ret;
ret = pm_runtime_get_sync(ctrl->dev);
if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) {
dev_err(ctrl->dev, "slim ctrl not active,state:%d, ret:%d\n",
ctrl->sched.clk_state, ret);
goto slimbus_not_active;
}
sbdev = slim_get_device(ctrl, e_addr);
if (IS_ERR(sbdev))
return -ENODEV;
if (sbdev->is_laddr_valid) {
*laddr = sbdev->laddr;
return 0;
}
ret = slim_device_alloc_laddr(sbdev, true);
slimbus_not_active:
pm_runtime_mark_last_busy(ctrl->dev);
pm_runtime_put_autosuspend(ctrl->dev);
return ret;
}
EXPORT_SYMBOL_GPL(slim_device_report_present);
/**
* slim_get_logical_addr() - get/allocate logical address of a SLIMbus device.
*
* @sbdev: client handle requesting the address.
*
* Return: zero if a logical address is valid or a new logical address
* has been assigned. error code in case of error.
*/
int slim_get_logical_addr(struct slim_device *sbdev)
{
if (!sbdev->is_laddr_valid)
return slim_device_alloc_laddr(sbdev, false);
return 0;
}
EXPORT_SYMBOL_GPL(slim_get_logical_addr);
static void __exit slimbus_exit(void)
{
bus_unregister(&slimbus_bus);
}
module_exit(slimbus_exit);
static int __init slimbus_init(void)
{
return bus_register(&slimbus_bus);
}
postcore_initcall(slimbus_init);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("SLIMbus core");
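
For reference, a minimal SLIMbus client built on the bus code above: it matches by manf_id/prod_code as slim_match() does, gets probe() and device_status() callbacks, and registers through the exported __slim_driver_register(). The IDs and names are placeholders, and the exact struct slim_driver layout is defined in <linux/slimbus.h>, which is not shown in this excerpt:

#include <linux/device.h>
#include <linux/module.h>
#include <linux/slimbus.h>

static int example_slim_probe(struct slim_device *sbdev)
{
        dev_info(&sbdev->dev, "SLIMbus device enumerated\n");
        return 0;
}

static void example_slim_status(struct slim_device *sbdev,
                                enum slim_device_status status)
{
        /* called when the framework assigns a logical address or the
         * device reports absent, see slim_device_update_status() above */
        dev_dbg(&sbdev->dev, "status changed to %d\n", status);
}

static const struct slim_device_id example_slim_ids[] = {
        { .manf_id = 0x217, .prod_code = 0x60 },        /* made-up IDs */
        { }
};

static struct slim_driver example_slim_driver = {
        .probe = example_slim_probe,
        .device_status = example_slim_status,
        .driver = {
                .name = "example-slim-codec",
        },
        .id_table = example_slim_ids,
};

static int __init example_slim_init(void)
{
        return __slim_driver_register(&example_slim_driver, THIS_MODULE);
}
module_init(example_slim_init);

static void __exit example_slim_exit(void)
{
        slim_driver_unregister(&example_slim_driver);
}
module_exit(example_slim_exit);

MODULE_LICENSE("GPL v2");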

332
drivers/slimbus/messaging.c Normal file
@ -0,0 +1,332 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2011-2017, The Linux Foundation
*/
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include "slimbus.h"
/**
* slim_msg_response() - Deliver Message response received from a device to the
* framework.
*
* @ctrl: Controller handle
* @reply: Reply received from the device
* @len: Length of the reply
* @tid: Transaction ID received with which framework can associate reply.
*
* Called by controller to inform framework about the response received.
* This helps in making the API asynchronous, and the controller driver
* doesn't need to manage one more table besides the TID-to-buffer mapping
* already maintained by the framework.
*/
void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 len)
{
struct slim_msg_txn *txn;
struct slim_val_inf *msg;
unsigned long flags;
spin_lock_irqsave(&ctrl->txn_lock, flags);
txn = idr_find(&ctrl->tid_idr, tid);
if (txn == NULL) {
spin_unlock_irqrestore(&ctrl->txn_lock, flags);
return;
}
msg = txn->msg;
if (msg == NULL || msg->rbuf == NULL) {
dev_err(ctrl->dev, "Got response to invalid TID:%d, len:%d\n",
tid, len);
spin_unlock_irqrestore(&ctrl->txn_lock, flags);
return;
}
idr_remove(&ctrl->tid_idr, tid);
spin_unlock_irqrestore(&ctrl->txn_lock, flags);
memcpy(msg->rbuf, reply, len);
if (txn->comp)
complete(txn->comp);
/* Remove runtime-pm vote now that response was received for TID txn */
pm_runtime_mark_last_busy(ctrl->dev);
pm_runtime_put_autosuspend(ctrl->dev);
}
EXPORT_SYMBOL_GPL(slim_msg_response);
/**
* slim_do_transfer() - Process a SLIMbus-messaging transaction
*
* @ctrl: Controller handle
* @txn: Transaction to be sent over SLIMbus
*
* Called by the controller to transmit messaging transactions not dealing
* with Interface/Value elements (e.g. transmitting a message to assign a
* logical address to a slave device).
*
* Return: -ETIMEDOUT: If transmission of this message timed out
* (e.g. due to bus lines not being clocked or driven by controller)
*/
int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
{
DECLARE_COMPLETION_ONSTACK(done);
bool need_tid = false, clk_pause_msg = false;
unsigned long flags;
int ret, tid, timeout;
/*
* do not vote for runtime-PM if the transactions are part of clock
* pause sequence
*/
if (ctrl->sched.clk_state == SLIM_CLK_ENTERING_PAUSE &&
(txn->mt == SLIM_MSG_MT_CORE &&
txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION &&
txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW))
clk_pause_msg = true;
if (!clk_pause_msg) {
ret = pm_runtime_get_sync(ctrl->dev);
if (ctrl->sched.clk_state != SLIM_CLK_ACTIVE) {
dev_err(ctrl->dev, "ctrl wrong state:%d, ret:%d\n",
ctrl->sched.clk_state, ret);
goto slim_xfer_err;
}
}
need_tid = slim_tid_txn(txn->mt, txn->mc);
if (need_tid) {
spin_lock_irqsave(&ctrl->txn_lock, flags);
tid = idr_alloc(&ctrl->tid_idr, txn, 0,
SLIM_MAX_TIDS, GFP_ATOMIC);
txn->tid = tid;
if (!txn->msg->comp)
txn->comp = &done;
else
txn->comp = txn->msg->comp;
spin_unlock_irqrestore(&ctrl->txn_lock, flags);
if (tid < 0)
return tid;
}
ret = ctrl->xfer_msg(ctrl, txn);
if (ret && need_tid && !txn->msg->comp) {
unsigned long ms = txn->rl + HZ;
timeout = wait_for_completion_timeout(txn->comp,
msecs_to_jiffies(ms));
if (!timeout) {
ret = -ETIMEDOUT;
spin_lock_irqsave(&ctrl->txn_lock, flags);
idr_remove(&ctrl->tid_idr, tid);
spin_unlock_irqrestore(&ctrl->txn_lock, flags);
}
}
if (ret)
dev_err(ctrl->dev, "Tx:MT:0x%x, MC:0x%x, LA:0x%x failed:%d\n",
txn->mt, txn->mc, txn->la, ret);
slim_xfer_err:
if (!clk_pause_msg && (!need_tid || ret == -ETIMEDOUT)) {
/*
* remove runtime-pm vote if this was TX only, or
* if there was error during this transaction
*/
pm_runtime_mark_last_busy(ctrl->dev);
pm_runtime_put_autosuspend(ctrl->dev);
}
return ret;
}
EXPORT_SYMBOL_GPL(slim_do_transfer);
static int slim_val_inf_sanity(struct slim_controller *ctrl,
struct slim_val_inf *msg, u8 mc)
{
if (!msg || msg->num_bytes > 16 ||
(msg->start_offset + msg->num_bytes) > 0xC00)
goto reterr;
switch (mc) {
case SLIM_MSG_MC_REQUEST_VALUE:
case SLIM_MSG_MC_REQUEST_INFORMATION:
if (msg->rbuf != NULL)
return 0;
break;
case SLIM_MSG_MC_CHANGE_VALUE:
case SLIM_MSG_MC_CLEAR_INFORMATION:
if (msg->wbuf != NULL)
return 0;
break;
case SLIM_MSG_MC_REQUEST_CHANGE_VALUE:
case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION:
if (msg->rbuf != NULL && msg->wbuf != NULL)
return 0;
break;
}
reterr:
if (msg)
dev_err(ctrl->dev, "Sanity check failed:msg:offset:0x%x, mc:%d\n",
msg->start_offset, mc);
return -EINVAL;
}
static u16 slim_slicesize(int code)
{
static const u8 sizetocode[16] = {
0, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7
};
code = clamp(code, 1, (int)ARRAY_SIZE(sizetocode));
return sizetocode[code - 1];
}
/**
* slim_xfer_msg() - Transfer a value info message on slim device
*
* @sbdev: slim device to which this msg has to be transferred
* @msg: value info message pointer
* @mc: message code of the message
*
* Called by drivers which want to transfer value or info elements.
*
* Return: -ETIMEDOUT: If transmission of this message timed out
*/
int slim_xfer_msg(struct slim_device *sbdev, struct slim_val_inf *msg,
u8 mc)
{
DEFINE_SLIM_LDEST_TXN(txn_stack, mc, 6, sbdev->laddr, msg);
struct slim_msg_txn *txn = &txn_stack;
struct slim_controller *ctrl = sbdev->ctrl;
int ret;
u16 sl;
if (!ctrl)
return -EINVAL;
ret = slim_val_inf_sanity(ctrl, msg, mc);
if (ret)
return ret;
sl = slim_slicesize(msg->num_bytes);
dev_dbg(ctrl->dev, "SB xfer msg:os:%x, len:%d, MC:%x, sl:%x\n",
msg->start_offset, msg->num_bytes, mc, sl);
txn->ec = ((sl | (1 << 3)) | ((msg->start_offset & 0xFFF) << 4));
switch (mc) {
case SLIM_MSG_MC_REQUEST_CHANGE_VALUE:
case SLIM_MSG_MC_CHANGE_VALUE:
case SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION:
case SLIM_MSG_MC_CLEAR_INFORMATION:
txn->rl += msg->num_bytes;
default:
break;
}
if (slim_tid_txn(txn->mt, txn->mc))
txn->rl++;
return slim_do_transfer(ctrl, txn);
}
EXPORT_SYMBOL_GPL(slim_xfer_msg);
static void slim_fill_msg(struct slim_val_inf *msg, u32 addr,
size_t count, u8 *rbuf, u8 *wbuf)
{
msg->start_offset = addr;
msg->num_bytes = count;
msg->rbuf = rbuf;
msg->wbuf = wbuf;
}
/**
* slim_read() - Read SLIMbus value element
*
* @sdev: client handle.
* @addr: address of value element to read.
* @count: number of bytes to read. Maximum bytes allowed are 16.
* @val: buffer that returns the data read from the value element
*
* Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of
* this message timed out (e.g. due to bus lines not being clocked
* or driven by controller)
*/
int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val)
{
struct slim_val_inf msg;
slim_fill_msg(&msg, addr, count, val, NULL);
return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_REQUEST_VALUE);
}
EXPORT_SYMBOL_GPL(slim_read);
/**
* slim_readb() - Read byte from SLIMbus value element
*
* @sdev: client handle.
* @addr: address in the value element to read.
*
* Return: byte value of value element.
*/
int slim_readb(struct slim_device *sdev, u32 addr)
{
int ret;
u8 buf;
ret = slim_read(sdev, addr, 1, &buf);
if (ret < 0)
return ret;
else
return buf;
}
EXPORT_SYMBOL_GPL(slim_readb);
/**
* slim_write() - Write SLIMbus value element
*
* @sdev: client handle.
* @addr: address in the value element to write.
* @count: number of bytes to write. Maximum bytes allowed are 16.
* @val: value to write to value element
*
* Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of
* this message timed out (e.g. due to bus lines not being clocked
* or driven by controller)
*/
int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val)
{
struct slim_val_inf msg;
slim_fill_msg(&msg, addr, count, val, NULL);
return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_CHANGE_VALUE);
}
EXPORT_SYMBOL_GPL(slim_write);
/**
* slim_writeb() - Write byte to SLIMbus value element
*
* @sdev: client handle.
* @addr: address of value element to write.
* @value: value to write to value element
*
* Return: -EINVAL for Invalid parameters, -ETIMEDOUT If transmission of
* this message timed out (e.g. due to bus lines not being clocked
* or driven by controller)
*
*/
int slim_writeb(struct slim_device *sdev, u32 addr, u8 value)
{
return slim_write(sdev, addr, 1, &value);
}
EXPORT_SYMBOL_GPL(slim_writeb);
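
A short usage sketch for the helpers above, as they might be called from a client driver's probe path; the element addresses 0x40/0x44 are placeholders, not addresses defined by this series:

#include <linux/device.h>
#include <linux/slimbus.h>

static int example_configure(struct slim_device *sdev)
{
        u8 revision[2];
        int ret;

        /* read two bytes starting at a (hypothetical) revision element */
        ret = slim_read(sdev, 0x40, sizeof(revision), revision);
        if (ret)
                return ret;

        /* write a single byte to a (hypothetical) control element */
        ret = slim_writeb(sdev, 0x44, 0x01);
        if (ret)
                return ret;

        dev_info(&sdev->dev, "revision %02x%02x configured\n",
                 revision[0], revision[1]);
        return 0;
}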

747
drivers/slimbus/qcom-ctrl.c Normal file
@ -0,0 +1,747 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2011-2017, The Linux Foundation
*/
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/delay.h>
#include <linux/clk.h>
#include <linux/of.h>
#include <linux/pm_runtime.h>
#include "slimbus.h"
/* Manager registers */
#define MGR_CFG 0x200
#define MGR_STATUS 0x204
#define MGR_INT_EN 0x210
#define MGR_INT_STAT 0x214
#define MGR_INT_CLR 0x218
#define MGR_TX_MSG 0x230
#define MGR_RX_MSG 0x270
#define MGR_IE_STAT 0x2F0
#define MGR_VE_STAT 0x300
#define MGR_CFG_ENABLE 1
/* Framer registers */
#define FRM_CFG 0x400
#define FRM_STAT 0x404
#define FRM_INT_EN 0x410
#define FRM_INT_STAT 0x414
#define FRM_INT_CLR 0x418
#define FRM_WAKEUP 0x41C
#define FRM_CLKCTL_DONE 0x420
#define FRM_IE_STAT 0x430
#define FRM_VE_STAT 0x440
/* Interface registers */
#define INTF_CFG 0x600
#define INTF_STAT 0x604
#define INTF_INT_EN 0x610
#define INTF_INT_STAT 0x614
#define INTF_INT_CLR 0x618
#define INTF_IE_STAT 0x630
#define INTF_VE_STAT 0x640
/* Interrupt status bits */
#define MGR_INT_TX_NACKED_2 BIT(25)
#define MGR_INT_MSG_BUF_CONTE BIT(26)
#define MGR_INT_RX_MSG_RCVD BIT(30)
#define MGR_INT_TX_MSG_SENT BIT(31)
/* Framer config register settings */
#define FRM_ACTIVE 1
#define CLK_GEAR 7
#define ROOT_FREQ 11
#define REF_CLK_GEAR 15
#define INTR_WAKE 19
#define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \
((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16))
#define SLIM_ROOT_FREQ 24576000
#define QCOM_SLIM_AUTOSUSPEND 1000
/* MAX message size over control channel */
#define SLIM_MSGQ_BUF_LEN 40
#define QCOM_TX_MSGS 2
#define QCOM_RX_MSGS 8
#define QCOM_BUF_ALLOC_RETRIES 10
#define CFG_PORT(r, v) ((v) ? CFG_PORT_V2(r) : CFG_PORT_V1(r))
/* V2 Component registers */
#define CFG_PORT_V2(r) ((r ## _V2))
#define COMP_CFG_V2 4
#define COMP_TRUST_CFG_V2 0x3000
/* V1 Component registers */
#define CFG_PORT_V1(r) ((r ## _V1))
#define COMP_CFG_V1 0
#define COMP_TRUST_CFG_V1 0x14
/* Resource group info for manager, and non-ported generic device-components */
#define EE_MGR_RSC_GRP (1 << 10)
#define EE_NGD_2 (2 << 6)
#define EE_NGD_1 0
struct slim_ctrl_buf {
void *base;
spinlock_t lock;
int head;
int tail;
int sl_sz;
int n;
};
struct qcom_slim_ctrl {
struct slim_controller ctrl;
struct slim_framer framer;
struct device *dev;
void __iomem *base;
void __iomem *slew_reg;
struct slim_ctrl_buf rx;
struct slim_ctrl_buf tx;
struct completion **wr_comp;
int irq;
struct workqueue_struct *rxwq;
struct work_struct wd;
struct clk *rclk;
struct clk *hclk;
};
static void qcom_slim_queue_tx(struct qcom_slim_ctrl *ctrl, void *buf,
u8 len, u32 tx_reg)
{
int count = (len + 3) >> 2;
__iowrite32_copy(ctrl->base + tx_reg, buf, count);
/* Ensure ordering of subsequent writes */
mb();
}
static void *slim_alloc_rxbuf(struct qcom_slim_ctrl *ctrl)
{
unsigned long flags;
int idx;
spin_lock_irqsave(&ctrl->rx.lock, flags);
if ((ctrl->rx.tail + 1) % ctrl->rx.n == ctrl->rx.head) {
spin_unlock_irqrestore(&ctrl->rx.lock, flags);
dev_err(ctrl->dev, "RX QUEUE full!");
return NULL;
}
idx = ctrl->rx.tail;
ctrl->rx.tail = (ctrl->rx.tail + 1) % ctrl->rx.n;
spin_unlock_irqrestore(&ctrl->rx.lock, flags);
return ctrl->rx.base + (idx * ctrl->rx.sl_sz);
}
static void slim_ack_txn(struct qcom_slim_ctrl *ctrl, int err)
{
struct completion *comp;
unsigned long flags;
int idx;
spin_lock_irqsave(&ctrl->tx.lock, flags);
idx = ctrl->tx.head;
ctrl->tx.head = (ctrl->tx.head + 1) % ctrl->tx.n;
spin_unlock_irqrestore(&ctrl->tx.lock, flags);
comp = ctrl->wr_comp[idx];
ctrl->wr_comp[idx] = NULL;
complete(comp);
}
static irqreturn_t qcom_slim_handle_tx_irq(struct qcom_slim_ctrl *ctrl,
u32 stat)
{
int err = 0;
if (stat & MGR_INT_TX_MSG_SENT)
writel_relaxed(MGR_INT_TX_MSG_SENT,
ctrl->base + MGR_INT_CLR);
if (stat & MGR_INT_TX_NACKED_2) {
u32 mgr_stat = readl_relaxed(ctrl->base + MGR_STATUS);
u32 mgr_ie_stat = readl_relaxed(ctrl->base + MGR_IE_STAT);
u32 frm_stat = readl_relaxed(ctrl->base + FRM_STAT);
u32 frm_cfg = readl_relaxed(ctrl->base + FRM_CFG);
u32 frm_intr_stat = readl_relaxed(ctrl->base + FRM_INT_STAT);
u32 frm_ie_stat = readl_relaxed(ctrl->base + FRM_IE_STAT);
u32 intf_stat = readl_relaxed(ctrl->base + INTF_STAT);
u32 intf_intr_stat = readl_relaxed(ctrl->base + INTF_INT_STAT);
u32 intf_ie_stat = readl_relaxed(ctrl->base + INTF_IE_STAT);
writel_relaxed(MGR_INT_TX_NACKED_2, ctrl->base + MGR_INT_CLR);
dev_err(ctrl->dev, "TX Nack MGR:int:0x%x, stat:0x%x\n",
stat, mgr_stat);
dev_err(ctrl->dev, "TX Nack MGR:ie:0x%x\n", mgr_ie_stat);
dev_err(ctrl->dev, "TX Nack FRM:int:0x%x, stat:0x%x\n",
frm_intr_stat, frm_stat);
dev_err(ctrl->dev, "TX Nack FRM:cfg:0x%x, ie:0x%x\n",
frm_cfg, frm_ie_stat);
dev_err(ctrl->dev, "TX Nack INTF:intr:0x%x, stat:0x%x\n",
intf_intr_stat, intf_stat);
dev_err(ctrl->dev, "TX Nack INTF:ie:0x%x\n",
intf_ie_stat);
err = -ENOTCONN;
}
slim_ack_txn(ctrl, err);
return IRQ_HANDLED;
}
static irqreturn_t qcom_slim_handle_rx_irq(struct qcom_slim_ctrl *ctrl,
u32 stat)
{
u32 *rx_buf, pkt[10];
bool q_rx = false;
u8 mc, mt, len;
pkt[0] = readl_relaxed(ctrl->base + MGR_RX_MSG);
mt = SLIM_HEADER_GET_MT(pkt[0]);
len = SLIM_HEADER_GET_RL(pkt[0]);
mc = SLIM_HEADER_GET_MC(pkt[0]>>8);
/*
* this message cannot be handled by ISR, so
* let work-queue handle it
*/
if (mt == SLIM_MSG_MT_CORE && mc == SLIM_MSG_MC_REPORT_PRESENT) {
rx_buf = (u32 *)slim_alloc_rxbuf(ctrl);
if (!rx_buf) {
dev_err(ctrl->dev, "dropping RX:0x%x due to RX full\n",
pkt[0]);
goto rx_ret_irq;
}
rx_buf[0] = pkt[0];
} else {
rx_buf = pkt;
}
__ioread32_copy(rx_buf + 1, ctrl->base + MGR_RX_MSG + 4,
DIV_ROUND_UP(len, 4));
switch (mc) {
case SLIM_MSG_MC_REPORT_PRESENT:
q_rx = true;
break;
case SLIM_MSG_MC_REPLY_INFORMATION:
case SLIM_MSG_MC_REPLY_VALUE:
slim_msg_response(&ctrl->ctrl, (u8 *)(rx_buf + 1),
(u8)(*rx_buf >> 24), (len - 4));
break;
default:
dev_err(ctrl->dev, "unsupported MC,%x MT:%x\n",
mc, mt);
break;
}
rx_ret_irq:
writel(MGR_INT_RX_MSG_RCVD, ctrl->base +
MGR_INT_CLR);
if (q_rx)
queue_work(ctrl->rxwq, &ctrl->wd);
return IRQ_HANDLED;
}
static irqreturn_t qcom_slim_interrupt(int irq, void *d)
{
struct qcom_slim_ctrl *ctrl = d;
u32 stat = readl_relaxed(ctrl->base + MGR_INT_STAT);
int ret = IRQ_NONE;
if (stat & MGR_INT_TX_MSG_SENT || stat & MGR_INT_TX_NACKED_2)
ret = qcom_slim_handle_tx_irq(ctrl, stat);
if (stat & MGR_INT_RX_MSG_RCVD)
ret = qcom_slim_handle_rx_irq(ctrl, stat);
return ret;
}
static int qcom_clk_pause_wakeup(struct slim_controller *sctrl)
{
struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev);
clk_prepare_enable(ctrl->hclk);
clk_prepare_enable(ctrl->rclk);
enable_irq(ctrl->irq);
writel_relaxed(1, ctrl->base + FRM_WAKEUP);
/* Make sure framer wakeup write goes through before ISR fires */
mb();
/*
* HW workaround: currently the slave reports lost-sync messages after
* SLIMbus comes out of clock pause. Transactions with the slave fail
* before the slave reports that message, so give the report some time
* to arrive. SLIMbus wakes up in clock gear 10 at 24.576 MHz; with each
* superframe being 250 usecs, we wait for 5-10 superframes here to make
* sure we get the message.
*/
usleep_range(1250, 2500);
return 0;
}
static void *slim_alloc_txbuf(struct qcom_slim_ctrl *ctrl,
struct slim_msg_txn *txn,
struct completion *done)
{
unsigned long flags;
int idx;
spin_lock_irqsave(&ctrl->tx.lock, flags);
if (((ctrl->tx.head + 1) % ctrl->tx.n) == ctrl->tx.tail) {
spin_unlock_irqrestore(&ctrl->tx.lock, flags);
dev_err(ctrl->dev, "controller TX buf unavailable");
return NULL;
}
idx = ctrl->tx.tail;
ctrl->wr_comp[idx] = done;
ctrl->tx.tail = (ctrl->tx.tail + 1) % ctrl->tx.n;
spin_unlock_irqrestore(&ctrl->tx.lock, flags);
return ctrl->tx.base + (idx * ctrl->tx.sl_sz);
}
static int qcom_xfer_msg(struct slim_controller *sctrl,
struct slim_msg_txn *txn)
{
struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev);
DECLARE_COMPLETION_ONSTACK(done);
void *pbuf = slim_alloc_txbuf(ctrl, txn, &done);
unsigned long ms = txn->rl + HZ;
u8 *puc;
int ret = 0, timeout, retries = QCOM_BUF_ALLOC_RETRIES;
u8 la = txn->la;
u32 *head;
/* HW expects length field to be excluded */
txn->rl--;
/* spin till buffer is made available */
if (!pbuf) {
while (retries--) {
usleep_range(10000, 15000);
pbuf = slim_alloc_txbuf(ctrl, txn, &done);
if (pbuf)
break;
}
}
if (retries < 0 && !pbuf)
return -ENOMEM;
puc = (u8 *)pbuf;
head = (u32 *)pbuf;
if (txn->dt == SLIM_MSG_DEST_LOGICALADDR) {
*head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt,
txn->mc, 0, la);
puc += 3;
} else {
*head = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt,
txn->mc, 1, la);
puc += 2;
}
if (slim_tid_txn(txn->mt, txn->mc))
*(puc++) = txn->tid;
if (slim_ec_txn(txn->mt, txn->mc)) {
*(puc++) = (txn->ec & 0xFF);
*(puc++) = (txn->ec >> 8) & 0xFF;
}
if (txn->msg && txn->msg->wbuf)
memcpy(puc, txn->msg->wbuf, txn->msg->num_bytes);
qcom_slim_queue_tx(ctrl, head, txn->rl, MGR_TX_MSG);
timeout = wait_for_completion_timeout(&done, msecs_to_jiffies(ms));
if (!timeout) {
dev_err(ctrl->dev, "TX timed out:MC:0x%x,mt:0x%x", txn->mc,
txn->mt);
ret = -ETIMEDOUT;
}
return ret;
}
static int qcom_set_laddr(struct slim_controller *sctrl,
struct slim_eaddr *ead, u8 laddr)
{
struct qcom_slim_ctrl *ctrl = dev_get_drvdata(sctrl->dev);
struct {
__be16 manf_id;
__be16 prod_code;
u8 dev_index;
u8 instance;
u8 laddr;
} __packed p;
struct slim_val_inf msg = {0};
DEFINE_SLIM_EDEST_TXN(txn, SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS,
10, laddr, &msg);
int ret;
p.manf_id = cpu_to_be16(ead->manf_id);
p.prod_code = cpu_to_be16(ead->prod_code);
p.dev_index = ead->dev_index;
p.instance = ead->instance;
p.laddr = laddr;
msg.wbuf = (void *)&p;
msg.num_bytes = 7;
ret = slim_do_transfer(&ctrl->ctrl, &txn);
if (ret)
dev_err(ctrl->dev, "set LA:0x%x failed:ret:%d\n",
laddr, ret);
return ret;
}
static int slim_get_current_rxbuf(struct qcom_slim_ctrl *ctrl, void *buf)
{
unsigned long flags;
spin_lock_irqsave(&ctrl->rx.lock, flags);
if (ctrl->rx.tail == ctrl->rx.head) {
spin_unlock_irqrestore(&ctrl->rx.lock, flags);
return -ENODATA;
}
memcpy(buf, ctrl->rx.base + (ctrl->rx.head * ctrl->rx.sl_sz),
ctrl->rx.sl_sz);
ctrl->rx.head = (ctrl->rx.head + 1) % ctrl->rx.n;
spin_unlock_irqrestore(&ctrl->rx.lock, flags);
return 0;
}
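/*
 * Descriptive note (added for clarity): RX workqueue handler. Drains the
 * RX ring and, for REPORT_PRESENT messages, reports the announcing
 * device to the SLIMbus core so it gets a logical address assigned.
 */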
static void qcom_slim_rxwq(struct work_struct *work)
{
u8 buf[SLIM_MSGQ_BUF_LEN];
u8 mc, mt, len;
int ret;
struct qcom_slim_ctrl *ctrl = container_of(work, struct qcom_slim_ctrl,
wd);
while (slim_get_current_rxbuf(ctrl, buf) != -ENODATA) {
len = SLIM_HEADER_GET_RL(buf[0]);
mt = SLIM_HEADER_GET_MT(buf[0]);
mc = SLIM_HEADER_GET_MC(buf[1]);
if (mt == SLIM_MSG_MT_CORE &&
mc == SLIM_MSG_MC_REPORT_PRESENT) {
struct slim_eaddr ea;
u8 laddr;
ea.manf_id = be16_to_cpup((__be16 *)&buf[2]);
ea.prod_code = be16_to_cpup((__be16 *)&buf[4]);
ea.dev_index = buf[6];
ea.instance = buf[7];
ret = slim_device_report_present(&ctrl->ctrl, &ea,
&laddr);
if (ret < 0)
dev_err(ctrl->dev, "assign laddr failed:%d\n",
ret);
} else {
dev_err(ctrl->dev, "unexpected message:mc:%x, mt:%x\n",
mc, mt);
}
}
}
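/*
 * Descriptive note (added for clarity): map (once) and program the
 * slew-rate register for this SLIMbus instance.
 */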
static void qcom_slim_prg_slew(struct platform_device *pdev,
struct qcom_slim_ctrl *ctrl)
{
struct resource *slew_mem;
if (!ctrl->slew_reg) {
/* SLEW RATE register for this SLIMbus */
slew_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM,
"slew");
if (!slew_mem)
return;
ctrl->slew_reg = devm_ioremap(&pdev->dev, slew_mem->start,
resource_size(slew_mem));
if (!ctrl->slew_reg)
return;
}
writel_relaxed(1, ctrl->slew_reg);
/* Make sure SLIMbus-slew rate enabling goes through */
wmb();
}
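/*
 * Descriptive note (added for clarity): probe acquires clocks and the
 * IRQ, maps the controller registers, sets up the TX/RX rings and the
 * RX workqueue, registers with the SLIMbus core, and then initializes
 * the component, manager, framer and interface blocks.
 */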
static int qcom_slim_probe(struct platform_device *pdev)
{
struct qcom_slim_ctrl *ctrl;
struct slim_controller *sctrl;
struct resource *slim_mem;
int ret, ver;
ctrl = devm_kzalloc(&pdev->dev, sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return -ENOMEM;
ctrl->hclk = devm_clk_get(&pdev->dev, "iface");
if (IS_ERR(ctrl->hclk))
return PTR_ERR(ctrl->hclk);
ctrl->rclk = devm_clk_get(&pdev->dev, "core");
if (IS_ERR(ctrl->rclk))
return PTR_ERR(ctrl->rclk);
ret = clk_set_rate(ctrl->rclk, SLIM_ROOT_FREQ);
if (ret) {
dev_err(&pdev->dev, "ref-clock set-rate failed:%d\n", ret);
return ret;
}
ctrl->irq = platform_get_irq(pdev, 0);
if (ctrl->irq < 0) {
dev_err(&pdev->dev, "no slimbus IRQ\n");
return ctrl->irq;
}
sctrl = &ctrl->ctrl;
sctrl->dev = &pdev->dev;
ctrl->dev = &pdev->dev;
platform_set_drvdata(pdev, ctrl);
dev_set_drvdata(ctrl->dev, ctrl);
slim_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
ctrl->base = devm_ioremap_resource(ctrl->dev, slim_mem);
if (IS_ERR(ctrl->base)) {
dev_err(&pdev->dev, "IOremap failed\n");
return PTR_ERR(ctrl->base);
}
sctrl->set_laddr = qcom_set_laddr;
sctrl->xfer_msg = qcom_xfer_msg;
sctrl->wakeup = qcom_clk_pause_wakeup;
ctrl->tx.n = QCOM_TX_MSGS;
ctrl->tx.sl_sz = SLIM_MSGQ_BUF_LEN;
ctrl->rx.n = QCOM_RX_MSGS;
ctrl->rx.sl_sz = SLIM_MSGQ_BUF_LEN;
ctrl->wr_comp = devm_kcalloc(&pdev->dev, QCOM_TX_MSGS,
sizeof(struct completion *), GFP_KERNEL);
if (!ctrl->wr_comp)
return -ENOMEM;
spin_lock_init(&ctrl->rx.lock);
spin_lock_init(&ctrl->tx.lock);
INIT_WORK(&ctrl->wd, qcom_slim_rxwq);
ctrl->rxwq = create_singlethread_workqueue("qcom_slim_rx");
if (!ctrl->rxwq) {
dev_err(ctrl->dev, "Failed to start Rx WQ\n");
return -ENOMEM;
}
ctrl->framer.rootfreq = SLIM_ROOT_FREQ / 8;
ctrl->framer.superfreq =
ctrl->framer.rootfreq / SLIM_CL_PER_SUPERFRAME_DIV8;
sctrl->a_framer = &ctrl->framer;
sctrl->clkgear = SLIM_MAX_CLK_GEAR;
qcom_slim_prg_slew(pdev, ctrl);
ret = devm_request_irq(&pdev->dev, ctrl->irq, qcom_slim_interrupt,
IRQF_TRIGGER_HIGH, "qcom_slim_irq", ctrl);
if (ret) {
dev_err(&pdev->dev, "request IRQ failed\n");
goto err_request_irq_failed;
}
ret = clk_prepare_enable(ctrl->hclk);
if (ret)
goto err_hclk_enable_failed;
ret = clk_prepare_enable(ctrl->rclk);
if (ret)
goto err_rclk_enable_failed;
ctrl->tx.base = devm_kcalloc(&pdev->dev, ctrl->tx.n, ctrl->tx.sl_sz,
GFP_KERNEL);
if (!ctrl->tx.base) {
ret = -ENOMEM;
goto err;
}
ctrl->rx.base = devm_kcalloc(&pdev->dev, ctrl->rx.n, ctrl->rx.sl_sz,
GFP_KERNEL);
if (!ctrl->rx.base) {
ret = -ENOMEM;
goto err;
}
/* Register with framework before enabling frame, clock */
ret = slim_register_controller(&ctrl->ctrl);
if (ret) {
dev_err(ctrl->dev, "error adding controller\n");
goto err;
}
ver = readl_relaxed(ctrl->base);
/* Version info in 16 MSbits */
ver >>= 16;
/* Component register initialization */
writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver));
writel((EE_MGR_RSC_GRP | EE_NGD_2 | EE_NGD_1),
ctrl->base + CFG_PORT(COMP_TRUST_CFG, ver));
writel((MGR_INT_TX_NACKED_2 |
MGR_INT_MSG_BUF_CONTE | MGR_INT_RX_MSG_RCVD |
MGR_INT_TX_MSG_SENT), ctrl->base + MGR_INT_EN);
writel(1, ctrl->base + MGR_CFG);
/* Framer register initialization */
writel((1 << INTR_WAKE) | (0xA << REF_CLK_GEAR) |
(0xA << CLK_GEAR) | (1 << ROOT_FREQ) | (1 << FRM_ACTIVE) | 1,
ctrl->base + FRM_CFG);
writel(MGR_CFG_ENABLE, ctrl->base + MGR_CFG);
writel(1, ctrl->base + INTF_CFG);
writel(1, ctrl->base + CFG_PORT(COMP_CFG, ver));
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_set_autosuspend_delay(&pdev->dev, QCOM_SLIM_AUTOSUSPEND);
pm_runtime_set_active(&pdev->dev);
pm_runtime_mark_last_busy(&pdev->dev);
pm_runtime_enable(&pdev->dev);
dev_dbg(ctrl->dev, "QCOM SB controller is up:ver:0x%x!\n", ver);
return 0;
err:
clk_disable_unprepare(ctrl->rclk);
err_rclk_enable_failed:
clk_disable_unprepare(ctrl->hclk);
err_hclk_enable_failed:
err_request_irq_failed:
destroy_workqueue(ctrl->rxwq);
return ret;
}
static int qcom_slim_remove(struct platform_device *pdev)
{
struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev);
pm_runtime_disable(&pdev->dev);
slim_unregister_controller(&ctrl->ctrl);
destroy_workqueue(ctrl->rxwq);
return 0;
}
/*
 * If runtime PM is not enabled, these two functions serve as helpers
 * called directly from system suspend/resume.
 */
#ifdef CONFIG_PM
static int qcom_slim_runtime_suspend(struct device *device)
{
struct platform_device *pdev = to_platform_device(device);
struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev);
int ret;
dev_dbg(device, "pm_runtime: suspending...\n");
ret = slim_ctrl_clk_pause(&ctrl->ctrl, false, SLIM_CLK_UNSPECIFIED);
if (ret) {
dev_err(device, "clk pause not entered:%d", ret);
} else {
disable_irq(ctrl->irq);
clk_disable_unprepare(ctrl->hclk);
clk_disable_unprepare(ctrl->rclk);
}
return ret;
}
static int qcom_slim_runtime_resume(struct device *device)
{
struct platform_device *pdev = to_platform_device(device);
struct qcom_slim_ctrl *ctrl = platform_get_drvdata(pdev);
int ret = 0;
dev_dbg(device, "pm_runtime: resuming...\n");
ret = slim_ctrl_clk_pause(&ctrl->ctrl, true, 0);
if (ret)
dev_err(device, "clk pause not exited:%d", ret);
return ret;
}
#endif
#ifdef CONFIG_PM_SLEEP
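/*
 * Descriptive note (added for clarity): at system suspend, enter clock
 * pause here only if runtime PM has not already suspended the
 * controller.
 */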
static int qcom_slim_suspend(struct device *dev)
{
int ret = 0;
if (!pm_runtime_enabled(dev) ||
(!pm_runtime_suspended(dev))) {
dev_dbg(dev, "system suspend");
ret = qcom_slim_runtime_suspend(dev);
}
return ret;
}
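/*
 * Descriptive note (added for clarity): at system resume, leave clock
 * pause here only if the controller was not runtime-suspended;
 * otherwise runtime resume will handle it when needed.
 */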
static int qcom_slim_resume(struct device *dev)
{
if (!pm_runtime_enabled(dev) || !pm_runtime_suspended(dev)) {
int ret;
dev_dbg(dev, "system resume");
ret = qcom_slim_runtime_resume(dev);
if (!ret) {
pm_runtime_mark_last_busy(dev);
pm_request_autosuspend(dev);
}
return ret;
}
return 0;
}
#endif /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops qcom_slim_dev_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(qcom_slim_suspend, qcom_slim_resume)
SET_RUNTIME_PM_OPS(
qcom_slim_runtime_suspend,
qcom_slim_runtime_resume,
NULL
)
};
static const struct of_device_id qcom_slim_dt_match[] = {
{ .compatible = "qcom,slim", },
{ .compatible = "qcom,apq8064-slim", },
{}
};
static struct platform_driver qcom_slim_driver = {
.probe = qcom_slim_probe,
.remove = qcom_slim_remove,
.driver = {
.name = "qcom_slim_ctrl",
.of_match_table = qcom_slim_dt_match,
.pm = &qcom_slim_dev_pm_ops,
},
};
module_platform_driver(qcom_slim_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Qualcomm SLIMbus Controller");
