The slave_id was previously used to pick one DMA slave instead of another,
but this is now done through the DMA descriptors in device tree.
For the qcom_adm driver, the configuration is documented in the DT
binding to contain a tuple of device identifier and a "crci" field,
but the implementation ends up using only a single cell for identifying
the slave, with the crci getting passed in nonstandard properties of
the device and forwarded to the dma driver through the old slave_id
field. Part of the problem apparently is that the nand driver ends up
using only a single DMA request ID, but requires distinct values for
"crci" depending on the type of transfer.
Change both the dmaengine driver and the two slave drivers to allow
the documented binding to work in addition to the ad-hoc passing
of crci values. In order to no longer abuse the slave_id field, pass
the data using the "peripheral_config" mechanism instead.
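A rough sketch of what the slave driver side looks like with this
mechanism (the config struct shape, i.e. a single crci member, is an
assumption here; see the actual header for the real layout):

    /* assumed shape of the driver-specific config shared with qcom_adm */
    struct qcom_adm_peripheral_config {
            u32 crci;
    };

    struct qcom_adm_peripheral_config periph = { .crci = crci };
    struct dma_slave_config cfg = { };

    cfg.direction = DMA_MEM_TO_DEV;
    cfg.peripheral_config = &periph;        /* carries crci to the dma driver */
    cfg.peripheral_size = sizeof(periph);
    dmaengine_slave_config(chan, &cfg);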
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-9-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
It appears that the code that reads the slave_id from the channel config
was copied incorrectly from other drivers. Nothing ever sets this field
on platforms that use this driver, so remove the reference.
Reviewed-by: Baolin Wang <baolin.wang7@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-8-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The last driver referencing the slave_id on Marvell PXA and MMP platforms
was the SPI driver, but this stopped doing so a long time ago, so the
TODO from the earlier patch can now be removed.
Fixes: b729bf34535e ("spi/pxa2xx: Don't use slave_id of dma_slave_config")
Fixes: 13b3006b8ebd ("dma: mmp_pdma: add filter function")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-7-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The slave device is picked through either devicetree or a filter
function, and any remaining out-of-tree drivers would have warned
about this usage since 2015.
Finally stop interpreting the field so it can be removed from
the interface.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-6-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Nothing sets the slave_id field any more, so stop accessing
it to allow the removal of this field.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20211122222203.4103644-11-arnd@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
There is no reason to walk the MSI descriptors to retrieve the interrupt
number for a device. Use msi_get_virq() instead.
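For example, with a struct device *dev for the client device and MSI
index 0 for its only vector:

    unsigned int irq = msi_get_virq(dev, 0);

    if (!irq)
            return -ENODEV; /* no Linux irq mapped for that MSI index */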
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Sinan Kaya <okaya@kernel.org>
Acked-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20211210221815.329792721@linutronix.de
Storing a pointer to the MSI descriptor just to keep track of the Linux
interrupt number is daft. Use msi_get_virq() instead.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20211210221814.970099984@linutronix.de
Use the common msi_index member and get rid of the pointless wrapper struct.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20211210221814.413638645@linutronix.de
The only unconditional part of MSI data in struct device is the irqdomain
pointer. Everything else can be allocated on demand. Create a data
structure and move the irqdomain pointer into it. The other MSI specific
parts are going to be removed from struct device in later steps.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Tested-by: Nishanth Menon <nm@ti.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20211210221813.617178827@linutronix.de
Ming reported that with the abort path of the descriptor submission, there
can be a window where a completed descriptor is missed by the irq
completion thread:
    CPU A                                       CPU B
    Submit (successful)
    Submit (fail)
                                                irq_process_work_list() // empty
    llist_abort_desc()
    // remove all descs from pending list
                                                irq_process_pending_llist() // empty
                                                exit idxd_wq_thread() with no processing
Add opportunistic descriptor completion in the abort path in order to
remove the missed completion.
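Roughly, while the abort path walks the descriptors it pulled off the
pending llist, it can do something like the following (helper names and
the exact completion call are illustrative, not the literal patch):

    if (d->completion->status) {
            /* hardware already completed this one; finish it here rather
             * than re-queueing it for an irq thread pass that may never
             * happen
             */
            idxd_dma_complete_txd(d, IDXD_COMPLETE_NORMAL, true);
    } else {
            list_add_tail(&d->list, &ie->work_list);
    }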
Fixes: 6b4b87f2c31a ("dmaengine: idxd: fix submission race window")
Reported-by: Ming Li <ming4.li@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163898288714.443911.16084982766671976640.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Smatch reports the warnings below [1] about dereferencing rm_res when it
can potentially be ERR_PTR(). This is possible when the entire range is
allocated to Linux.
Fix this by making sure there is no dereference of rm_res when it is
ERR_PTR(); a sketch of the guard follows the warning list below.
[1]:
drivers/dma/ti/k3-udma.c:4524 udma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4537 udma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4681 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4696 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4711 bcdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4848 pktdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
drivers/dma/ti/k3-udma.c:4861 pktdma_setup_resources() error: 'rm_res' dereferencing possible ERR_PTR()
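A sketch of the guard for one of the resource maps (field and helper
names follow the driver and are meant as illustration only):

    rm_res = tisci_rm->rm_ranges[RM_RANGE_TCHAN];
    if (IS_ERR(rm_res)) {
            /* entire range is for Linux: treat every channel as free */
            bitmap_zero(ud->tchan_map, ud->tchan_cnt);
    } else {
            bitmap_fill(ud->tchan_map, ud->tchan_cnt);
            for (i = 0; i < rm_res->sets; i++)
                    udma_mark_resource_ranges(ud, ud->tchan_map,
                                              &rm_res->desc[i], "tchan");
    }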
Reported-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20211209180957.29036-1-vigneshr@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The variable used for returning status in the
`ppc440spe_adma_dma2rxor_prep_src' function is never changed, so the
function only needs to return 0. Thus, `rval' can be removed and 0
returned directly from `ppc440spe_adma_dma2rxor_prep_src'.
Signed-off-by: Jason Wang <wangborong@cdjrlc.com>
Link: https://lore.kernel.org/r/20211114060856.239314-1-wangborong@cdjrlc.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Dan reports that smatch has found idxd_wq_quiesce() is being called inside
the idxd->dev_lock. idxd_wq_quiesce() calls wait_for_completion() and
therefore it can sleep. Move the call outside of the spinlock as it does
not need the device lock.
Fixes: 5b0c68c473a1 ("dmaengine: idxd: support reporting of halt interrupt")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163716858508.1721911.15051495873516709923.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The commit in the Fixes: tag has changed the logic of the code and now it
is likely that the probe will return an early success (0), even if not
completely executed.
This should lead to a crash or similar issue later on when the code
accesses resources that were never allocated.
Change the '!err' into 'err' when checking if
'dma_set_mask_and_coherent()' has failed or not.
While at it, simplify the code and remove the fallback to a 32-bit DMA
mask, which cannot succeed anyway.
As stated in [1], 'dma_set_mask_and_coherent(DMA_BIT_MASK(64))' can't fail
if 'dev->dma_mask' is non-NULL. And if it is NULL, it would fail for the
same reason when tried with DMA_BIT_MASK(32).
[1]: https://lkml.org/lkml/2021/6/7/398
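The resulting check boils down to something like this (the error message
wording is illustrative):

    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
    if (err) {
            pci_err(pdev, "DMA mask set failed\n");
            return err;
    }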
Fixes: ecb8c88bd31c ("dmaengine: dw-edma-pcie: switch from 'pci_' to 'dma_' API")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/935fbb40ae930c5fe87482a41dcb73abf2257973.1636492127.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This new CDMA binding for device_prep_dma_memcpy_sg was partially borrowed
from the xlnx kernel tree and expanded with extended address space support
when linking descriptor segments, plus a check for an incorrect zero
transfer size.
Signed-off-by: Adrian Larumbe <adrianml@alumnos.upm.es>
Link: https://lore.kernel.org/r/20211101180825.241048-4-adrianml@alumnos.upm.es
Signed-off-by: Vinod Koul <vkoul@kernel.org>
This is the old DMA_SG interface that was removed in commit
c678fa66341c ("dmaengine: remove DMA_SG as it is dead code in kernel"). It
has been renamed to DMA_MEMCPY_SG to better match the MEMSET and MEMSET_SG
naming convention.
It should only be used for mem2mem copies, either main system memory or
CPU-addressable device memory (like video memory on a PCI graphics card).
Bringing back this interface was prompted by the need to use the Xilinx
CDMA device for mem2mem SG transfers.
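A sketch of how a mem2mem client might use it, assuming the prep callback
takes (dst_sg, dst_nents, src_sg, src_nents, flags) like the other
scatter-gather prep operations:

    struct dma_async_tx_descriptor *tx;
    dma_cookie_t cookie;

    tx = chan->device->device_prep_dma_memcpy_sg(chan, dst_sg, dst_nents,
                                                 src_sg, src_nents,
                                                 DMA_PREP_INTERRUPT);
    if (!tx)
            return -ENOMEM;

    cookie = dmaengine_submit(tx);
    dma_async_issue_pending(chan);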
Signed-off-by: Adrian Larumbe <adrianml@alumnos.upm.es>
Link: https://lore.kernel.org/r/20211101180825.241048-3-adrianml@alumnos.upm.es
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Coverity complains of uninitialized variable use:

  CID 121164 (#1-3 of 3): Uninitialized scalar variable (UNINIT)
  uninit_use_in_call: Using uninitialized value config.dst_per when calling axi_chan_config_write.
  uninit_use_in_call: Using uninitialized value config.hs_sel_src when calling axi_chan_config_write.
  uninit_use_in_call: Using uninitialized value config.src_per when calling axi_chan_config_write.
  418        axi_chan_config_write(chan, &config);
Fix this by initializing the structure to 0, which should at least be
benign in axi_chan_config_write(). Also fix what looks like a
cut-and-paste error when initializing config.hs_sel_dst.
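One way to do the zero-initialization (the struct type name is assumed;
the actual fix may differ in detail):

    struct axi_dma_chan_config config;

    memset(&config, 0, sizeof(config));
    /* ...set only the fields this path actually needs... */
    axi_chan_config_write(chan, &config);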
Fixes: 824351668a413 ("dmaengine: dw-axi-dmac: support DMAX_NUM_CHANNELS > 8")
Cc: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Link: https://lore.kernel.org/r/20211025181656.31658-1-tim.gardner@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
"Interrupt handle revoked" is an event that happens when the driver is
running on a guest kernel and the VM is migrated to a new machine.
The device will trigger an interrupt that signals to the guest driver
that the interrupt handles need to be replaced.
The misc irq thread function calls a helper function to handle the
event. The function uses the WQ percpu_ref to quiesce the kernel
submissions. It then replaces the interrupt handles by issuing a request
interrupt handle command for each I/O MSIX vector. Once the handles are
updated, the driver unblocks the submission path to allow new
submissions.
The submitter will attempt to acquire a percpu_ref before submission. When
the request fails, it will wait on the wq_resurrect 'completion'.
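Roughly, the submission-side gate looks like this (the wq_active and
wq_resurrect member names are illustrative):

    if (!percpu_ref_tryget_live(&wq->wq_active)) {
            wait_for_completion(&wq->wq_resurrect);
            if (!percpu_ref_tryget_live(&wq->wq_active))
                    return -ENXIO;
    }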
The driver anticipates that descriptors may already have been submitted
before the WQ percpu_ref is killed. Such a descriptor will return with an
incorrect interrupt handle status and will be re-submitted with the new
interrupt handle on the completion path. For descriptors with incorrect
interrupt handles, no completion interrupt is triggered.
At the completion of the interrupt handle refresh, the handling function
calls idxd_int_handle_refresh_drain() to issue drain descriptors to each
of the wqs with an associated interrupt handle. The drain descriptors have
the request interrupt flag set but no completion record. This ensures that
all descriptors with incorrect interrupt completion handles get drained
and a completion interrupt is triggered for the guest driver to process
them.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Co-Developed-by: Sanjay Kumar <sanjay.k.kumar@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528420189.3925689.18212568593220415551.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Handle a descriptor that has been marked with invalid interrupt handle
error in status. Create a work item that will resubmit the descriptor. This
typically happens when the driver has handled the revoke interrupt handle
event and has a new interrupt handle.
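A minimal sketch of the deferral (structure and function names are
illustrative):

    struct idxd_resubmit {
            struct work_struct work;
            struct idxd_desc *desc;
    };

    static void idxd_int_handle_resubmit_work(struct work_struct *work)
    {
            struct idxd_resubmit *rw = container_of(work, struct idxd_resubmit, work);

            /* re-issue the descriptor, now picking up the refreshed handle */
            idxd_submit_desc(rw->desc->wq, rw->desc);
            kfree(rw);
    }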
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528419601.3925689.4166517602890523193.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add a locked version of the idxd_quiesce() call so that quiesce can take
the lock in situations where the caller does not already hold it.
In the driver probe/remove path, the lock is already held, so the raw
version can be called without locking.
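The shape of the split (member names follow my reading of the driver and
are illustrative):

    /* caller already holds wq->wq_lock */
    void __idxd_wq_quiesce(struct idxd_wq *wq)
    {
            lockdep_assert_held(&wq->wq_lock);
            percpu_ref_kill(&wq->wq_active);
            wait_for_completion(&wq->wq_dead);
    }

    void idxd_wq_quiesce(struct idxd_wq *wq)
    {
            mutex_lock(&wq->wq_lock);
            __idxd_wq_quiesce(wq);
            mutex_unlock(&wq->wq_lock);
    }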
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528418980.3925689.5841907054957931211.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The helper is called at the completion of the interrupt handle refresh
event. It issues drain descriptors to each of the wqs with an associated
interrupt handle. The drain descriptors have the request interrupt flag
set but no completion record. This ensures that all descriptors with
incorrect interrupt completion handles get drained and a completion
interrupt is triggered for the guest driver to process them.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528418315.3925689.7944718440052849626.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In preparation of supporting interrupt handle revoke event, move the
interrupt handle assignment to right before the descriptor is submitted.
This allows the interrupt handle revoke logic to assign the latest
interrupt handle on submission.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528417767.3925689.7730411152122952808.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Attach int_handle to irq_entry. This removes the separate management of
int handles and the confusion of iterating through int handles with an
off-by-one count.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528417065.3925689.11505755433684476288.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Refactor the completion function to allow skipping of descriptor freeing on
the submission failure path. This completely removes descriptor freeing
from the submit failure path and leaves the responsibility to the caller.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163528416222.3925689.12859769271667814762.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Merge tag 'dmaengine-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine updates from Vinod Koul:
"A bunch of driver updates, no new driver or controller support this
time though:
- Another pile of idxd updates
- pm routines cleanup for at_xdmac driver
- Correct handling of callback_result for few drivers
- zynqmp_dma driver updates and descriptor management refinement
- Hardware handshaking support for dw-axi-dmac
- Support for remotely powered controllers in Qcom bam dma
- tegra driver updates"
* tag 'dmaengine-5.16-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (69 commits)
dmaengine: ti: k3-udma: Set r/tchan or rflow to NULL if request fail
dmaengine: ti: k3-udma: Set bchan to NULL if a channel request fail
dmaengine: stm32-dma: avoid 64-bit division in stm32_dma_get_max_width
dmaengine: fsl-edma: support edma memcpy
dmaengine: idxd: fix resource leak on dmaengine driver disable
dmaengine: idxd: cleanup completion record allocation
dmaengine: zynqmp_dma: Correctly handle descriptor callbacks
dmaengine: xilinx_dma: Correctly handle cyclic descriptor callbacks
dmaengine: altera-msgdma: Correctly handle descriptor callbacks
dmaengine: at_xdmac: fix compilation warning
dmaengine: dw-axi-dmac: Simplify assignment in dma_chan_pause()
dmaengine: qcom: bam_dma: Add "powered remotely" mode
dt-bindings: dmaengine: bam_dma: Add "powered remotely" mode
dmaengine: sa11x0: Mark PM functions as __maybe_unused
dmaengine: switch from 'pci_' to 'dma_' API
dmaengine: ioat: switch from 'pci_' to 'dma_' API
dmaengine: hsu: switch from 'pci_' to 'dma_' API
dmaengine: hisi_dma: switch from 'pci_' to 'dma_' API
dmaengine: dw: switch from 'pci_' to 'dma_' API
dmaengine: dw-edma-pcie: switch from 'pci_' to 'dma_' API
...
udma_get_*() checks if rchan/tchan/rflow is already allocated by checking
whether it has a non-NULL value. In the error cases, rchan/tchan/rflow
holds an error value, and udma_get_*() considers this as already allocated
(PASS) since the error values are non-NULL. This results in a NULL pointer
dereference error when rchan/tchan/rflow is later dereferenced.
Reset the value of rchan/tchan/rflow to NULL if a channel request fails.
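For tchan, the fix pattern looks roughly like this (the reserve helper
name is illustrative):

    uc->tchan = reserve_tchan(ud);          /* illustrative helper */
    if (IS_ERR(uc->tchan)) {
            ret = PTR_ERR(uc->tchan);
            /* don't let udma_get_*() mistake the error value for an
             * allocated channel
             */
            uc->tchan = NULL;
            return ret;
    }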
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-3-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
bcdma_get_*() checks if bchan is already allocated by checking whether it
has a non-NULL value. In the error cases, bchan holds an error value and
bcdma_get_*() considers this as already allocated (PASS) since the error
values are non-NULL. This results in a NULL pointer dereference error when
bchan is later dereferenced.
Reset the value of bchan to NULL if a channel request fails.
CC: stable@vger.kernel.org
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20211031032411.27235-2-kishon@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Using the % operator on a 64-bit variable is expensive and can
cause a link failure:
arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_get_max_width':
stm32-dma.c:(.text+0x170): undefined reference to `__aeabi_uldivmod'
arm-linux-gnueabi-ld: drivers/dma/stm32-dma.o: in function `stm32_dma_set_xfer_param':
stm32-dma.c:(.text+0x1cd4): undefined reference to `__aeabi_uldivmod'
Since we only want to check the alignment in stm32_dma_get_max_width(),
there is no need for a full division; a simple mask is a faster
replacement.
Similarly, in stm32_dma_set_xfer_param(), change this to only allow burst
transfers if the address is a multiple of the length.
stm32_dma_get_best_burst(), called just after, takes buf_len into account
to fix the burst in case of misalignment.
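For a power-of-two width the alignment test reduces to a mask, roughly:

    /* buf_addr % max_width on a u64 pulls in __aeabi_uldivmod on ARM;
     * the mask form does not
     */
    if (buf_addr & (max_width - 1))
            max_width = DMA_SLAVE_BUSWIDTH_1_BYTE;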
Fixes: b20fd5fa310c ("dmaengine: stm32-dma: fix stm32_dma_get_max_width")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20211103153312.41483-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Add memcpy support to edma. The edma has the capability to transfer data
by software trigger, so it can be used for memory copy. Enable MEMCPY for
the edma driver; it can be tested directly with dmatest.
Signed-off-by: Joy Zou <joy.zou@nxp.com>
Link: https://lore.kernel.org/r/20211026090025.2777292-1-joy.zou@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wq resources need to be released before the kernel type is reset by
__drv_disable_wq(). With the dma channels unregistered and the wq
quiesced, all the wq resources for dmaengine can be freed. There is no
need to wait until the wq is disabled. With wq->type reset to "unknown",
the driver skips freeing the resources.
Fixes: 0cda4f6986a3 ("dmaengine: idxd: create dmaengine driver for wq 'device'")
Reported-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Tested-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163517405099.3484556.12521975053711345244.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
According to core-api/dma-api-howto.rst, the address from
dma_alloc_coherent is guaranteed to be aligned to the smallest PAGE_SIZE
order. That supersedes the 64B/32B alignment requirement of the completion
record. Remove the alignment adjustment code.
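In other words, the buffer below is already PAGE_SIZE aligned, which more
than covers the 64B/32B requirement (variable names are illustrative):

    wq->compls = dma_alloc_coherent(dev, wq->compls_size,
                                    &wq->compls_addr, GFP_KERNEL);
    if (!wq->compls)
            return -ENOMEM;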
Tested-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Link: https://lore.kernel.org/r/163517396063.3484297.7494385225280705372.stgit@djiang5-desk3.ch.intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
`dmaengine_desc_callback_invoke()`. This makes sure that both types of
callbacks are handled correctly.
The zynqmp_dma driver currently doesn't do this and only handles the
`callback` type callback. If the client used the `callback_result` type
callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
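The recommended pattern looks roughly like this (the descriptor/tx field
naming is illustrative):

    struct dmaengine_desc_callback cb;

    dmaengine_desc_get_callback(&desc->async_tx, &cb);
    if (dmaengine_desc_callback_valid(&cb))
            dmaengine_desc_callback_invoke(&cb, NULL);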
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-3-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
`dmaengine_desc_callback_invoke()`. This makes sure that both types of
callbacks are handled correctly.
The xilinx_dma driver currently doesn't do this for cyclic descriptors and
only handles the `callback` type callback. If the client used the
`callback_result` type callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-2-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
DMA clients can provide one of two types of callbacks. For this reason
dmaengine drivers should not directly invoke `callback`, but always use
dmaengine_desc_callback_invoke(). This makes sure that both types of
callbacks are handled correctly.
The altera-msgdma driver currently doesn't do this and only handles the
`callback` type callback. If the client used the `callback_result` type
callback it will not be called.
Fix this by switching to `dmaengine_desc_callback_valid()` and
`dmaengine_desc_callback_invoke()`.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Link: https://lore.kernel.org/r/20211025075428.2094-1-lars@metafoo.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
In some configurations, the BAM DMA controller is set up by a remote
processor and the local processor can simply start making use of it
without setting up the BAM. This is already supported using the
"qcom,controlled-remotely" property.
However, for some reason another possible configuration is that the
remote processor is responsible for powering up the BAM, but we are
still responsible for initializing it (e.g. resetting it etc).
This configuration is quite challenging to handle properly because
the power control is handled through separate channels
(e.g. device-specific SMSM interrupts / smem-states). Great care
must be taken to ensure the BAM registers are not accessed while
the BAM is powered off since this results in a bus stall.
Attempt to support this configuration with minimal device-specific
code in the bam_dma driver by tracking the number of requested
channels. Consumers of DMA channels are responsible to only request
DMA channels when the BAM was powered on by the remote processor,
and to release them before the BAM is powered off.
When the first channel is requested the BAM is initialized (reset), and it
is put back into reset when the last channel is released.
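A rough sketch of the refcounting (field and helper names are
illustrative, and the real code has to serialize the count updates):

    /* in device_alloc_chan_resources */
    if (bdev->active_channels++ == 0)
            bam_init(bdev);         /* first user: initialize/reset the BAM */

    /* in device_free_chan_resources */
    if (--bdev->active_channels == 0)
            bam_reset(bdev);        /* last user gone: put the BAM back into reset */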
Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
Reviewed-by: Bhupesh Sharma <bhupesh.sharma@linaro.org>
Link: https://lore.kernel.org/r/20211018102421.19848-3-stephan@gerhold.net
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Without CONFIG_PM_SLEEP, the runtime suspend/resume functions
are unused, producing a warning:
../drivers/dma/sa11x0-dma.c:1042:12: error: 'sa11x0_dma_resume' defined but not used
../drivers/dma/sa11x0-dma.c:1004:12: error: 'sa11x0_dma_suspend' defined but not used
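Per the patch subject, the fix is to annotate the definitions, e.g.:

    /* kept compiled in all configs, but no unused-function warning when
     * CONFIG_PM_SLEEP is off
     */
    static int __maybe_unused sa11x0_dma_suspend(struct device *dev)
    {
            return 0;       /* real suspend logic elided in this sketch */
    }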
Signed-off-by: Cai Huoqing <caihuoqing@baidu.com>
Link: https://lore.kernel.org/r/20211026020508.550-1-caihuoqing@baidu.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
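In a typical PCI probe path the conversion looks like this:

    /* before, via the pci-dma-compat.h wrappers: */
    err = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
    if (!err)
            err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));

    /* after, covering both masks in one call: */
    err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));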
Signed-off-by: Qing Wang <wangqing@vivo.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Link: https://lore.kernel.org/r/1633663733-47199-7-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-3-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-4-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-5-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
Signed-off-by: Qing Wang <wangqing@vivo.com>
Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
Link: https://lore.kernel.org/r/1633663733-47199-6-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The wrappers in include/linux/pci-dma-compat.h should go away.
pci_set_dma_mask()/pci_set_consistent_dma_mask() should be
replaced with dma_set_mask()/dma_set_coherent_mask(), or with
dma_set_mask_and_coherent() when setting both.
Signed-off-by: Wang Qing <wangqing@vivo.com>
Link: https://lore.kernel.org/r/1633663733-47199-2-git-send-email-wangqing@vivo.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Don't populate the read-only array ds_lut on the stack but instead make it
static. This also makes the object code smaller by 163 bytes:
Before:
text data bss dec hex filename
23508 4796 0 28304 6e90 ./drivers/dma/sh/rz-dmac.o
After:
text data bss dec hex filename
23281 4860 0 28141 6ded ./drivers/dma/sh/rz-dmac.o
(gcc version 11.2.0)
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Link: https://lore.kernel.org/r/20210915112038.12407-1-colin.king@canonical.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
The issue happens in an error handling path. If
of_dma_controller_register() fails, the function simply prints error
messages and returns an error code, without decrementing the reference
count of pdev->device incremented earlier by
dma_async_device_register(), which may result in refcount leaks.
Fix it by invoking dma_async_device_unregister() before returning the
error code.
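The fixed error path looks roughly like this (the xlate callback and
driver-data names are illustrative):

    ret = of_dma_controller_register(pdev->dev.of_node, of_dma_xlate, od);
    if (ret) {
            dev_err(&pdev->dev, "failed to register OF DMA controller\n");
            /* undo the registration (and reference) taken just above */
            dma_async_device_unregister(&od->dma);
            return ret;
    }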
Signed-off-by: Xin Xiong <xiongx18@fudan.edu.cn>
Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
Link: https://lore.kernel.org/r/20210911070533.3114-1-xiongx18@fudan.edu.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>