linux-stable/kernel/dma
Michael Kelley 7296f2301a swiotlb: reduce swiotlb pool lookups
With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair
through the swiotlb results in 6 calls to swiotlb_find_pool(). In
multiple places, the pool is found and used in one function, and then
must be found again in the next function called, because only the
tlb_addr is passed as an argument. These are the six call sites (a
condensed sketch of the unmap path follows the list):

dma_direct_map_page:
 1. swiotlb_map -> swiotlb_tbl_map_single -> swiotlb_bounce

dma_direct_unmap_page:
 2. dma_direct_sync_single_for_cpu -> is_swiotlb_buffer
 3. dma_direct_sync_single_for_cpu -> swiotlb_sync_single_for_cpu ->
	swiotlb_bounce
 4. is_swiotlb_buffer
 5. swiotlb_tbl_unmap_single -> swiotlb_del_transient
 6. swiotlb_tbl_unmap_single -> swiotlb_release_slots
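
For illustration, the pre-patch unmap path has roughly this shape (a
condensed sketch, not the exact kernel code; the real helper is the
static inline dma_direct_unmap_page() in kernel/dma/direct.h):

    static inline void dma_direct_unmap_page(struct device *dev,
                    dma_addr_t addr, size_t size,
                    enum dma_data_direction dir, unsigned long attrs)
    {
            phys_addr_t phys = dma_to_phys(dev, addr);

            if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                    /* is_swiotlb_buffer() plus swiotlb_bounce():
                     * lookups 2 and 3 above */
                    dma_direct_sync_single_for_cpu(dev, addr, size, dir);

            if (unlikely(is_swiotlb_buffer(dev, phys)))   /* lookup 4 */
                    /* swiotlb_del_transient() and
                     * swiotlb_release_slots(): lookups 5 and 6 */
                    swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
    }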

Reduce the number of calls by finding the pool at a higher level and
passing it as an argument instead of searching again. The key change
is to have is_swiotlb_buffer() return a pool pointer instead of a
boolean; that pool pointer is then passed to the subsequent swiotlb
functions.
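
In header terms, the change looks roughly like this (a sketch; NULL
takes over the role of the old 'false' return, so callers can still
test the result directly):

    /* before: a boolean answer, so callees must search again */
    bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr);

    /* after: return the lookup result itself;
     * NULL means "not a swiotlb buffer" */
    struct io_tlb_pool *swiotlb_find_pool(struct device *dev,
                                          phys_addr_t paddr);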

There are 9 occurrences of is_swiotlb_buffer() used to test if a buffer
is a swiotlb buffer before calling a swiotlb function. To reduce code
duplication in getting the pool pointer and passing it as an argument,
introduce inline wrappers for this pattern. The generated code is
essentially unchanged.
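
One such wrapper might look like this (illustrative sketch; the
__-prefixed, pool-taking helper name follows the patch's renaming
convention rather than being quoted from it):

    static inline void swiotlb_sync_single_for_cpu(struct device *dev,
                    phys_addr_t paddr, size_t size,
                    enum dma_data_direction dir)
    {
            struct io_tlb_pool *pool = swiotlb_find_pool(dev, paddr);

            if (pool)
                    __swiotlb_sync_single_for_cpu(dev, paddr, size,
                                                  dir, pool);
    }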

Since is_swiotlb_buffer() no longer returns a boolean, rename some
functions to reflect the change:

 * swiotlb_find_pool() becomes __swiotlb_find_pool()
 * is_swiotlb_buffer() becomes swiotlb_find_pool()
 * is_xen_swiotlb_buffer() becomes xen_swiotlb_find_pool()

With these changes, a round-trip map/unmap pair requires only 2 pool
lookups (listed using the new names and wrappers):

dma_direct_unmap_page:
 1. dma_direct_sync_single_for_cpu -> swiotlb_find_pool
 2. swiotlb_tbl_unmap_single -> swiotlb_find_pool
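
Sketched with the new names (condensed, using the same __-prefix
convention as above; compare with the pre-patch sketch earlier), the
body of dma_direct_unmap_page() now searches once per call:

    if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
            /* swiotlb_find_pool() once (lookup 1); the pool is
             * passed down to __swiotlb_sync_single_for_cpu() */
            dma_direct_sync_single_for_cpu(dev, addr, size, dir);

    /* swiotlb_find_pool() once (lookup 2); the pool is passed to
     * __swiotlb_tbl_unmap_single(), which hands it on to
     * swiotlb_del_transient() and swiotlb_release_slots() */
    swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);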

These changes come from noticing the inefficiencies during code
review, not from performance measurements. But with
CONFIG_SWIOTLB_DYNAMIC, __swiotlb_find_pool() is not trivial and takes
an RCU read lock, so avoiding the redundant calls helps performance in
a hot path. When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size
reduction is minimal and the performance benefit is likely negligible,
but no harm is done.
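
For context, the lookup being avoided walks an RCU-protected list of
pools in the dynamic case, roughly like this (condensed sketch of
__swiotlb_find_pool(); the real code in kernel/dma/swiotlb.c also
searches the per-device transient pools):

    struct io_tlb_pool *__swiotlb_find_pool(struct device *dev,
                                            phys_addr_t paddr)
    {
            struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
            struct io_tlb_pool *pool;

            rcu_read_lock();
            list_for_each_entry_rcu(pool, &mem->pools, node) {
                    if (paddr >= pool->start && paddr < pool->end)
                            goto out;
            }
            pool = NULL;    /* not a swiotlb buffer */
    out:
            rcu_read_unlock();
            return pool;
    }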

No functional change is intended.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr@tesarici.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2024-07-10 07:59:03 +02:00
coherent.c dma-mapping: clear dev->dma_mem to NULL after freeing it 2023-12-15 12:32:45 +01:00
contiguous.c mm/cma: drop CONFIG_CMA_DEBUG 2024-02-22 10:24:53 -08:00
debug.c dma-mapping fixes for Linux 6.8 2024-01-18 16:49:34 -08:00
debug.h dma-debug: teach add_dma_entry() about DMA_ATTR_SKIP_CPU_SYNC 2021-10-18 12:46:45 +02:00
direct.c swiotlb: reduce swiotlb pool lookups 2024-07-10 07:59:03 +02:00
direct.h swiotlb: reduce swiotlb pool lookups 2024-07-10 07:59:03 +02:00
dummy.c dma-mapping: return error code from dma_dummy_map_sg() 2021-08-09 17:13:06 +02:00
Kconfig dma: compile-out DMA sync op calls when not used 2024-05-07 13:29:53 +02:00
Makefile dma-mapping: remove CONFIG_DMA_REMAP 2022-03-03 14:00:57 +03:00
map_benchmark.c dma-mapping: benchmark: Don't starve others when doing the test 2024-07-09 07:48:32 +02:00
mapping.c dma-mapping updates for Linux 6.10 2024-05-20 10:23:39 -07:00
ops_helpers.c dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable} 2021-07-16 11:30:26 +02:00
pool.c mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER 2024-01-08 15:27:15 -08:00
remap.c dma-remap: use kvmalloc_array/kvfree for larger dma memory remap 2023-06-07 15:06:28 +02:00
swiotlb.c swiotlb: reduce swiotlb pool lookups 2024-07-10 07:59:03 +02:00