License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX license identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis, with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was
not immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license was applied.
For non-*/uapi/* files, that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet by Kate, Philippe and Thomas to determine the SPDX license
identifiers to apply to the source files, with confirmation in some cases
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights.
The Windriver scanner is based in part on an older version of FOSSology,
so they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to have
copy/paste license identifier errors; they have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the .csv files and add the proper SPDX tag to each file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
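
For illustration (an editorial example, not text from the commit itself), the
tag added by this series is a single comment at the top of each file, in the
comment style that file type expects:

    /* SPDX-License-Identifier: GPL-2.0 */   <-- C headers (.h)
    // SPDX-License-Identifier: GPL-2.0      <-- C sources (.c)

Files under */uapi/* get "GPL-2.0 WITH Linux-syscall-note" rather than plain
"GPL-2.0", as described above.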
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_SWIOTLB_H
#define __LINUX_SWIOTLB_H

#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/limits.h>
#include <linux/spinlock.h>

swiotlb: allocate a new memory pool when existing pools are full
When swiotlb_find_slots() cannot find suitable slots, schedule the
allocation of a new memory pool. It is not possible to allocate the pool
immediately, because this code may run in interrupt context, which is not
suitable for large memory allocations. This means that the memory pool will
be available too late for the currently requested mapping, but the stress
on the software IO TLB allocator is likely to continue, and subsequent
allocations will benefit from the additional pool eventually.
Keep all memory pools for an allocator in an RCU list to avoid locking on
the read side. For modifications, add a new spinlock to struct io_tlb_mem.
The spinlock also protects updates to the total number of slabs (nslabs in
struct io_tlb_mem), but not reads of the value. Readers may therefore
encounter a stale value, but this is not an issue:
- swiotlb_tbl_map_single() and is_swiotlb_active() only check for non-zero
value. This is ensured by the existence of the default memory pool,
allocated at boot.
- The exact value is used only for non-critical purposes (debugfs, kernel
messages).
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
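
The deferral described above follows a standard kernel pattern: the
atomic-context caller only schedules work, and the worker does the large
allocation and publishes the new pool on an RCU list. The sketch below is
illustrative only; names such as my_pool, grow_worker() and
request_more_pools() are invented for the example, and this is not the actual
swiotlb code.

    #include <linux/list.h>
    #include <linux/rculist.h>
    #include <linux/sizes.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/workqueue.h>

    struct my_pool {
            struct list_head node;
            void *buf;
    };

    static LIST_HEAD(my_pools);                    /* RCU-protected list of pools */
    static DEFINE_SPINLOCK(my_pools_lock);         /* serializes writers only */

    /* Runs in process context, so a large allocation is acceptable here. */
    static void grow_worker(struct work_struct *work)
    {
            struct my_pool *pool = kzalloc(sizeof(*pool), GFP_KERNEL);

            if (!pool)
                    return;
            pool->buf = kzalloc(SZ_128K, GFP_KERNEL);
            if (!pool->buf) {
                    kfree(pool);
                    return;
            }

            spin_lock(&my_pools_lock);
            list_add_rcu(&pool->node, &my_pools);  /* readers never take the lock */
            spin_unlock(&my_pools_lock);
    }

    static DECLARE_WORK(grow_work, grow_worker);

    /* Safe from interrupt context: just kick the worker and return. */
    static void request_more_pools(void)
    {
            schedule_work(&grow_work);
    }

The currently failing request still misses out, exactly as the message says;
only later requests see the extra pool.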

#include <linux/workqueue.h>

struct device;
struct page;
struct scatterlist;

#define SWIOTLB_VERBOSE (1 << 0) /* verbose initialization */
#define SWIOTLB_FORCE   (1 << 1) /* force bounce buffering */
#define SWIOTLB_ANY     (1 << 2) /* allow any memory for the buffer */
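
As a usage illustration only (not taken from any particular architecture),
these flags are ORed together and passed to swiotlb_init(), which is declared
further down in this header:

    /* Hypothetical arch setup: devices are addressing-limited, any memory
     * may back the bounce buffer, and initialization should be verbose. */
    swiotlb_init(true, SWIOTLB_ANY | SWIOTLB_VERBOSE);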
|
2016-12-16 14:28:41 +01:00
|
|
|
|
2008-12-16 12:17:27 -08:00
|
|
|
/*
|
|
|
|
* Maximum allowable number of contiguous slabs to map,
|
|
|
|
* must be a power of 2. What is the appropriate value ?
|
|
|
|
* The complexity of {map,unmap}_single is linearly dependent on this value.
|
|
|
|
*/
|
|
|
|
#define IO_TLB_SEGSIZE 128
|
|
|
|
|
|
|
|
/*
|
|
|
|
* log of the size of each IO TLB slab. The number of slabs is command line
|
|
|
|
* controllable.
|
|
|
|
*/
|
|
|
|
#define IO_TLB_SHIFT 11
|
2021-02-05 11:18:40 +01:00
|
|
|
#define IO_TLB_SIZE (1 << IO_TLB_SHIFT)
|
2008-12-16 12:17:27 -08:00
|
|
|
|
2020-12-10 01:25:15 +00:00
|
|
|
/* default to 64MB */
|
|
|
|
#define IO_TLB_DEFAULT_SIZE (64UL<<20)
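
(Editorial note for orientation: with IO_TLB_SHIFT = 11 each slot is
1 << 11 = 2048 bytes, so the 64MB default corresponds to
(64 << 20) / 2048 = 32768 slots, and a single mapping can span at most
IO_TLB_SEGSIZE * 2048 = 256KB of contiguous slots.)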

unsigned long swiotlb_size_or_default(void);
void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
        int (*remap)(void *tlb, unsigned long nslabs));
int swiotlb_init_late(size_t size, gfp_t gfp_mask,
        int (*remap)(void *tlb, unsigned long nslabs));
extern void __init swiotlb_update_mem_attributes(void);

phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,

swiotlb: remove alloc_size argument to swiotlb_tbl_map_single()
Currently swiotlb_tbl_map_single() takes alloc_align_mask and
alloc_size arguments to specify an swiotlb allocation that is larger
than mapping_size. This larger allocation is used solely by
iommu_dma_map_single() to handle untrusted devices that should not have
DMA visibility to memory pages that are partially used for unrelated
kernel data.
Having two arguments to specify the allocation is redundant. While
alloc_align_mask naturally specifies the alignment of the starting
address of the allocation, it can also implicitly specify the size
by rounding up the mapping_size to that alignment.
Additionally, the current approach has an edge case bug.
iommu_dma_map_page() already does the rounding up to compute the
alloc_size argument. But swiotlb_tbl_map_single() then calculates the
alignment offset based on the DMA min_align_mask, and adds that offset to
alloc_size. If the offset is non-zero, the addition may result in a value
that is larger than the max the swiotlb can allocate. If the rounding up
is done _after_ the alignment offset is added to the mapping_size (and
the original mapping_size conforms to the value returned by
swiotlb_max_mapping_size), then the max that the swiotlb can allocate
will not be exceeded.
In view of these issues, simplify the swiotlb_tbl_map_single() interface
by removing the alloc_size argument. Most call sites pass the same value
for mapping_size and alloc_size, and they pass alloc_align_mask as zero.
Just remove the redundant argument from these callers, as they will see
no functional change. For iommu_dma_map_page() also remove the alloc_size
argument, and have swiotlb_tbl_map_single() compute the alloc_size by
rounding up mapping_size after adding the offset based on min_align_mask.
This has the side effect of fixing the edge case bug but with no other
functional change.
Also add a sanity test on the alloc_align_mask. While IOMMU code
currently ensures the granule is not larger than PAGE_SIZE, if that
guarantee were to be removed in the future, the downstream effect on the
swiotlb might go unnoticed until strange allocation failures occurred.
Tested on an ARM64 system with 16K page size and some kernel test-only
hackery to allow modifying the DMA min_align_mask and the granule size
that becomes the alloc_align_mask. Tested these combinations with a
variety of original memory addresses and sizes, including those that
reproduce the edge case bug:
* 4K granule and 0 min_align_mask
* 4K granule and 0xFFF min_align_mask (4K - 1)
* 16K granule and 0xFFF min_align_mask
* 64K granule and 0xFFF min_align_mask
* 64K granule and 0x3FFF min_align_mask (16K - 1)
With the changes, all combinations pass.
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr@tesarici.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>

                size_t mapping_size,
                unsigned int alloc_aligned_mask, enum dma_data_direction dir,
                unsigned long attrs);
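
The size computation described in the message above can be sketched as
follows. This is illustrative only: example_alloc_size() is an invented name
and the masks are passed explicitly rather than derived from the device, so
it is not the kernel's implementation.

    #include <linux/align.h>
    #include <linux/types.h>

    static size_t example_alloc_size(phys_addr_t phys, size_t mapping_size,
                                     unsigned long min_align_mask,
                                     unsigned long alloc_align_mask)
    {
            /* How far into its min_align_mask granule the buffer starts. */
            unsigned long offset = phys & min_align_mask;

            /*
             * Add the offset first, then round up to the allocation
             * alignment, so the result cannot exceed the limit implied by
             * swiotlb_max_mapping_size().
             */
            return ALIGN(mapping_size + offset, alloc_align_mask + 1);
    }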

extern void swiotlb_tbl_unmap_single(struct device *hwdev,
                                     phys_addr_t tlb_addr,
                                     size_t mapping_size,
                                     enum dma_data_direction dir,
                                     unsigned long attrs);

void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
                size_t size, enum dma_data_direction dir);
void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
                size_t size, enum dma_data_direction dir);
dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
                size_t size, enum dma_data_direction dir, unsigned long attrs);
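
A rough sketch of how a DMA mapping path might use the helpers above;
my_map_page(), my_sync_for_cpu() and my_unmap_page() are invented names,
error handling is trimmed, and this is not the actual dma-direct code:

    #include <linux/swiotlb.h>

    static dma_addr_t my_map_page(struct device *dev, phys_addr_t paddr,
                                  size_t size, enum dma_data_direction dir,
                                  unsigned long attrs)
    {
            /* Bounce through the software IO TLB instead of mapping directly. */
            return swiotlb_map(dev, paddr, size, dir, attrs);
    }

    static void my_sync_for_cpu(struct device *dev, phys_addr_t paddr,
                                size_t size, enum dma_data_direction dir)
    {
            /* Only bounce buffers need copying back to the original memory. */
            if (is_swiotlb_buffer(dev, paddr))
                    swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
    }

    static void my_unmap_page(struct device *dev, phys_addr_t tlb_addr,
                              size_t size, enum dma_data_direction dir,
                              unsigned long attrs)
    {
            if (is_swiotlb_buffer(dev, tlb_addr))
                    swiotlb_tbl_unmap_single(dev, tlb_addr, size, dir, attrs);
    }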

#ifdef CONFIG_SWIOTLB

/**
 * struct io_tlb_pool - IO TLB memory pool descriptor
 * @start:      The start address of the swiotlb memory pool. Used to do a quick
 *              range check to see if the memory was in fact allocated by this
 *              API.
 * @end:        The end address of the swiotlb memory pool. Used to do a quick
 *              range check to see if the memory was in fact allocated by this
 *              API.
 * @vaddr:      The vaddr of the swiotlb memory pool. The swiotlb memory pool
 *              may be remapped in the memory encrypted case and store virtual
 *              address for bounce buffer operation.
 * @nslabs:     The number of IO TLB slots between @start and @end. For the
 *              default swiotlb, this can be adjusted with a boot parameter,
 *              see setup_io_tlb_npages().
 * @late_alloc: %true if allocated using the page allocator.
 * @nareas:     Number of areas in the pool.
 * @area_nslabs: Number of slots in each area.
 * @areas:      Array of memory area descriptors.
 * @slots:      Array of slot descriptors.

swiotlb: if swiotlb is full, fall back to a transient memory pool
Try to allocate a transient memory pool if no suitable slots can be found
and the respective SWIOTLB is allowed to grow. The transient pool is just
big enough for this one bounce buffer. It is inserted into a per-device
list of transient memory pools, and it is freed again when the bounce
buffer is unmapped.
Transient memory pools are kept in an RCU list. A memory barrier is
required after adding a new entry, because any address within a transient
buffer must be immediately recognized as belonging to the SWIOTLB, even if
it is passed to another CPU.
Deletion does not require any synchronization beyond RCU ordering
guarantees. After a buffer is unmapped, its physical addresses may no
longer be passed to the DMA API, so the memory range of the corresponding
stale entry in the RCU list never matches. If the memory range gets
allocated again, then it happens only after a RCU quiescent state.
Since bounce buffers can now be allocated from different pools, add a
parameter to swiotlb_alloc_pool() to let the caller know which memory pool
is used. Add swiotlb_find_pool() to find the memory pool corresponding to
an address. This function is now also used by is_swiotlb_buffer(), because
a simple boundary check is no longer sufficient.
The logic in swiotlb_alloc_tlb() is taken from __dma_direct_alloc_pages(),
simplified and enhanced to use coherent memory pools if needed.
Note that this is not the most efficient way to provide a bounce buffer,
but when a DMA buffer can't be mapped, something may (and will) actually
break. At that point it is better to make an allocation, even if it may be
an expensive operation.
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>

 * @node:       Member of the IO TLB memory pool list.
 * @rcu:        RCU head for swiotlb_dyn_free().
 * @transient:  %true if transient memory pool.
 */
struct io_tlb_pool {
        phys_addr_t start;
        phys_addr_t end;
        void *vaddr;
        unsigned long nslabs;
        bool late_alloc;
        unsigned int nareas;
        unsigned int area_nslabs;
        struct io_tlb_area *areas;
        struct io_tlb_slot *slots;
#ifdef CONFIG_SWIOTLB_DYNAMIC
        struct list_head node;
        struct rcu_head rcu;
        bool transient;
#endif
};

/**
 * struct io_tlb_mem - Software IO TLB allocator
 * @defpool:    Default (initial) IO TLB memory pool descriptor.
 * @pool:       IO TLB memory pool descriptor (if not dynamic).
 * @nslabs:     Total number of IO TLB slabs in all pools.
 * @debugfs:    The dentry to debugfs.
 * @force_bounce: %true if swiotlb bouncing is forced
 * @for_alloc:  %true if the pool is used for memory allocation
 * @can_grow:   %true if more pools can be allocated dynamically.
 * @phys_limit: Maximum allowed physical address.
 * @lock:       Lock to synchronize changes to the list.
 * @pools:      List of IO TLB memory pool descriptors (if dynamic).
 * @dyn_alloc:  Dynamic IO TLB pool allocation work.
 * @total_used: The total number of slots in the pool that are currently used
 *              across all areas. Used only for calculating used_hiwater in
 *              debugfs.
 * @used_hiwater: The high water mark for total_used. Used only for reporting
 *              in debugfs.
 * @transient_nslabs: The total number of slots in all transient pools that
 *              are currently used across all areas.
 */
struct io_tlb_mem {
        struct io_tlb_pool defpool;
        unsigned long nslabs;
        struct dentry *debugfs;
        bool force_bounce;
        bool for_alloc;
#ifdef CONFIG_SWIOTLB_DYNAMIC
        bool can_grow;
        u64 phys_limit;
        spinlock_t lock;
        struct list_head pools;
        struct work_struct dyn_alloc;
#endif
#ifdef CONFIG_DEBUG_FS
        atomic_long_t total_used;
        atomic_long_t used_hiwater;
        atomic_long_t transient_nslabs;
#endif
};

#ifdef CONFIG_SWIOTLB_DYNAMIC

struct io_tlb_pool *swiotlb_find_pool(struct device *dev, phys_addr_t paddr);

#else

static inline struct io_tlb_pool *swiotlb_find_pool(struct device *dev,
                                                    phys_addr_t paddr)
{
        return &dev->dma_io_tlb_mem->defpool;
}

#endif
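
Under CONFIG_SWIOTLB_DYNAMIC, the out-of-line swiotlb_find_pool() has to walk
the RCU list of pools kept in struct io_tlb_mem. A simplified sketch of that
kind of lookup, using only fields declared in this header (illustrative, not
the exact kernel code):

    #include <linux/rculist.h>
    #include <linux/swiotlb.h>

    static struct io_tlb_pool *example_find_pool(struct device *dev,
                                                 phys_addr_t paddr)
    {
            struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
            struct io_tlb_pool *pool;

            rcu_read_lock();
            list_for_each_entry_rcu(pool, &mem->pools, node) {
                    if (paddr >= pool->start && paddr < pool->end) {
                            rcu_read_unlock();
                            return pool;
                    }
            }
            rcu_read_unlock();
            return NULL;
    }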

/**
 * is_swiotlb_buffer() - check if a physical address belongs to a swiotlb
 * @dev:        Device which has mapped the buffer.
 * @paddr:      Physical address within the DMA buffer.
 *
 * Check if @paddr points into a bounce buffer.
 *
 * Return:
 * * %true if @paddr points into a bounce buffer
 * * %false otherwise
 */
static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
{
        struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
        if (!mem)
                return false;

swiotlb: fix the check whether a device has used software IO TLB
When CONFIG_SWIOTLB_DYNAMIC=y, devices which do not use the software IO TLB
can avoid the swiotlb lookup. A flag was added by commit 1395706a1490
("swiotlb: search the software IO TLB only if the device makes use of it");
the flag is correctly set, but it was never checked. Add the actual check here.
Note that this code is an alternative to the default pool check, not an
additional check, because:
1. swiotlb_find_pool() also searches the default pool;
2. if dma_uses_io_tlb is false, the default swiotlb pool is not used.
Tested in a KVM guest against a QEMU RAM-backed SATA disk over virtio and
*not* using software IO TLB, this patch increases IOPS by approx 2% for
4-way parallel I/O.
The write memory barrier in swiotlb_dyn_alloc() is not needed, because a
newly allocated pool must always be observed by swiotlb_find_slots() before
an address from that pool is passed to is_swiotlb_buffer().
Correctness was verified using the following litmus test:
C swiotlb-new-pool
(*
* Result: Never
*
* Check that a newly allocated pool is always visible when the
* corresponding swiotlb buffer is visible.
*)
{
        mem_pools = default;
}

P0(int **mem_pools, int *pool)
{
        /* add_mem_pool() */
        WRITE_ONCE(*pool, 999);
        rcu_assign_pointer(*mem_pools, pool);
}

P1(int **mem_pools, int *flag, int *buf)
{
        /* swiotlb_find_slots() */
        int *r0;
        int r1;

        rcu_read_lock();
        r0 = READ_ONCE(*mem_pools);
        r1 = READ_ONCE(*r0);
        rcu_read_unlock();

        if (r1) {
                WRITE_ONCE(*flag, 1);
                smp_mb();
        }

        /* device driver (presumed) */
        WRITE_ONCE(*buf, r1);
}

P2(int **mem_pools, int *flag, int *buf)
{
        /* device driver (presumed) */
        int r0 = READ_ONCE(*buf);

        /* is_swiotlb_buffer() */
        int r1;
        int *r2;
        int r3;

        smp_rmb();
        r1 = READ_ONCE(*flag);
        if (r1) {
                /* swiotlb_find_pool() */
                rcu_read_lock();
                r2 = READ_ONCE(*mem_pools);
                r3 = READ_ONCE(*r2);
                rcu_read_unlock();
        }
}

exists (2:r0<>0 /\ 2:r3=0) (* Not found. *)
Fixes: 1395706a1490 ("swiotlb: search the software IO TLB only if the device makes use of it")
Reported-by: Jonathan Corbet <corbet@lwn.net>
Closes: https://lore.kernel.org/linux-iommu/87a5uz3ob8.fsf@meer.lwn.net/
Signed-off-by: Petr Tesarik <petr@tesarici.cz>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>

#ifdef CONFIG_SWIOTLB_DYNAMIC
        /*
         * All SWIOTLB buffer addresses must have been returned by
         * swiotlb_tbl_map_single() and passed to a device driver.
         * If a SWIOTLB address is checked on another CPU, then it was
         * presumably loaded by the device driver from an unspecified private
         * data structure. Make sure that this load is ordered before reading
         * dev->dma_uses_io_tlb here and mem->pools in swiotlb_find_pool().
         *
         * This barrier pairs with smp_mb() in swiotlb_find_slots().
         */
        smp_rmb();
        return READ_ONCE(dev->dma_uses_io_tlb) &&
                swiotlb_find_pool(dev, paddr);
#else
        return paddr >= mem->defpool.start && paddr < mem->defpool.end;
#endif
}

static inline bool is_swiotlb_force_bounce(struct device *dev)
{
        struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

        return mem && mem->force_bounce;
}

void swiotlb_init(bool addressing_limited, unsigned int flags);
void __init swiotlb_exit(void);
void swiotlb_dev_init(struct device *dev);
size_t swiotlb_max_mapping_size(struct device *dev);
bool is_swiotlb_allocated(void);
bool is_swiotlb_active(struct device *dev);
void __init swiotlb_adjust_size(unsigned long size);
phys_addr_t default_swiotlb_base(void);
phys_addr_t default_swiotlb_limit(void);
#else
static inline void swiotlb_init(bool addressing_limited, unsigned int flags)
{
}

static inline void swiotlb_dev_init(struct device *dev)
{
}

static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
{
        return false;
}
static inline bool is_swiotlb_force_bounce(struct device *dev)
{
        return false;
}
static inline void swiotlb_exit(void)
{
}
static inline size_t swiotlb_max_mapping_size(struct device *dev)
{
        return SIZE_MAX;
}

static inline bool is_swiotlb_allocated(void)
{
        return false;
}

static inline bool is_swiotlb_active(struct device *dev)
{
        return false;
}

static inline void swiotlb_adjust_size(unsigned long size)
{
}

static inline phys_addr_t default_swiotlb_base(void)
{
        return 0;
}

static inline phys_addr_t default_swiotlb_limit(void)
{
        return 0;
}
#endif /* CONFIG_SWIOTLB */

extern void swiotlb_print_info(void);

#ifdef CONFIG_DMA_RESTRICTED_POOL
struct page *swiotlb_alloc(struct device *dev, size_t size);
bool swiotlb_free(struct device *dev, struct page *page, size_t size);

static inline bool is_swiotlb_for_alloc(struct device *dev)
{
        return dev->dma_io_tlb_mem->for_alloc;
}
#else
static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
{
        return NULL;
}
static inline bool swiotlb_free(struct device *dev, struct page *page,
                                size_t size)
{
        return false;
}
static inline bool is_swiotlb_for_alloc(struct device *dev)
{
        return false;
}
#endif /* CONFIG_DMA_RESTRICTED_POOL */
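
Illustrative only: a caller with a restricted DMA pool might use the two
helpers above roughly like this (my_alloc_pages() and my_free_pages() are
invented names and the fallback path is simplified):

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/swiotlb.h>

    static struct page *my_alloc_pages(struct device *dev, size_t size)
    {
            if (is_swiotlb_for_alloc(dev))
                    return swiotlb_alloc(dev, size);         /* restricted pool */
            return alloc_pages(GFP_KERNEL, get_order(size)); /* normal path */
    }

    static void my_free_pages(struct device *dev, struct page *page, size_t size)
    {
            /* swiotlb_free() returns false if the page did not come from the pool. */
            if (!swiotlb_free(dev, page, size))
                    __free_pages(page, get_order(size));
    }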

#endif /* __LINUX_SWIOTLB_H */