linux-stable/drivers/xen
Michael Kelley 7296f2301a swiotlb: reduce swiotlb pool lookups
With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair
through the swiotlb results in 6 calls to swiotlb_find_pool(). In multiple
places, the pool is found and used in one function, and then must be
found again in the next function called, because only the tlb_addr is
passed as an argument. These are the six call sites (a simplified sketch
of the unmap path follows the list):

dma_direct_map_page:
 1. swiotlb_map -> swiotlb_tbl_map_single -> swiotlb_bounce

dma_direct_unmap_page:
 2. dma_direct_sync_single_for_cpu -> is_swiotlb_buffer
 3. dma_direct_sync_single_for_cpu -> swiotlb_sync_single_for_cpu ->
	swiotlb_bounce
 4. is_swiotlb_buffer
 5. swiotlb_tbl_unmap_single -> swiotlb_del_transient
 6. swiotlb_tbl_unmap_single -> swiotlb_release_slots
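
For illustration, the unmap path before this change looks roughly like
the sketch below (simplified, not the exact kernel code; the numbered
lookups from the list are marked in comments, and the map-side lookup 1
is not shown):

static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        phys_addr_t phys = dma_to_phys(dev, addr);

        /*
         * Lookups 2 and 3: is_swiotlb_buffer() and swiotlb_bounce(), both
         * reached via dma_direct_sync_single_for_cpu().
         */
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                dma_direct_sync_single_for_cpu(dev, addr, size, dir);

        /* Lookup 4 here; lookups 5 and 6 inside swiotlb_tbl_unmap_single(). */
        if (unlikely(is_swiotlb_buffer(dev, phys)))
                swiotlb_tbl_unmap_single(dev, phys, size, dir,
                                         attrs | DMA_ATTR_SKIP_CPU_SYNC);
}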

Reduce the number of calls by finding the pool at a higher level, and
passing it as an argument instead of searching again. A key change is
for is_swiotlb_buffer() to return a pool pointer instead of a boolean,
and then pass this pool pointer to subsequent swiotlb functions.
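
In a caller, the open-coded form of that pattern is roughly the
following (a sketch only; the functions are renamed below, and the exact
position of the added pool parameter is illustrative):

        struct io_tlb_pool *pool = is_swiotlb_buffer(dev, paddr);

        if (pool)       /* non-NULL: paddr is in a swiotlb bounce buffer */
                swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);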

There are 9 occurrences of is_swiotlb_buffer() used to test if a buffer
is a swiotlb buffer before calling a swiotlb function. To reduce code
duplication in getting the pool pointer and passing it as an argument,
introduce inline wrappers for this pattern. The generated code is
essentially unchanged.
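
As an illustration, such a wrapper could look like this (a sketch using
the new names introduced below; __swiotlb_tbl_unmap_single() as the name
of the pool-taking core function is an assumption of this sketch):

static inline void swiotlb_tbl_unmap_single(struct device *dev,
                phys_addr_t addr, size_t size, enum dma_data_direction dir,
                unsigned long attrs)
{
        struct io_tlb_pool *pool = swiotlb_find_pool(dev, addr);

        /* Only call into the core swiotlb code for real bounce buffers. */
        if (unlikely(pool))
                __swiotlb_tbl_unmap_single(dev, addr, size, dir, attrs, pool);
}

Callers that previously open-coded the is_swiotlb_buffer() check can then
call the wrapper unconditionally.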

Since is_swiotlb_buffer() no longer returns a boolean, rename some
functions to reflect the change (illustrative declarations follow the list):

 * swiotlb_find_pool() becomes __swiotlb_find_pool()
 * is_swiotlb_buffer() becomes swiotlb_find_pool()
 * is_xen_swiotlb_buffer() becomes xen_swiotlb_find_pool()
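
Sketched as declarations (parameter lists are assumed from the existing
interfaces, not spelled out in this message):

/* was swiotlb_find_pool(): the low-level pool search */
struct io_tlb_pool *__swiotlb_find_pool(struct device *dev, phys_addr_t paddr);

/* was is_swiotlb_buffer(): NULL now means "not a swiotlb bounce buffer" */
struct io_tlb_pool *swiotlb_find_pool(struct device *dev, phys_addr_t paddr);

/* was is_xen_swiotlb_buffer(), in drivers/xen/swiotlb-xen.c */
struct io_tlb_pool *xen_swiotlb_find_pool(struct device *dev, dma_addr_t dma_addr);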

With these changes, a round-trip map/unmap pair requires only 2 pool
lookups (listed using the new names and wrappers; a sketch of the
resulting unmap path follows the list):

dma_direct_unmap_page:
 1. dma_direct_sync_single_for_cpu -> swiotlb_find_pool
 2. swiotlb_tbl_unmap_single -> swiotlb_find_pool
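
A simplified sketch of the resulting unmap path (assuming the caller's
explicit is_swiotlb_buffer() check is folded into the
swiotlb_tbl_unmap_single() wrapper, which the lookup count above implies):

static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
{
        phys_addr_t phys = dma_to_phys(dev, addr);

        /* Pool lookup 1 happens inside the sync path. */
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                dma_direct_sync_single_for_cpu(dev, addr, size, dir);

        /*
         * Pool lookup 2 happens inside the wrapper, which is a no-op for
         * buffers that are not in the swiotlb.
         */
        swiotlb_tbl_unmap_single(dev, phys, size, dir,
                                 attrs | DMA_ATTR_SKIP_CPU_SYNC);
}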

These changes come from noticing the inefficiencies in a code review,
not from performance measurements. With CONFIG_SWIOTLB_DYNAMIC,
__swiotlb_find_pool() is not trivial, and it uses an RCU read lock,
so avoiding the redundant calls helps performance in a hot path.
When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size reduction
is minimal and the perf benefits are likely negligible, but no
harm is done.

No functional change is intended.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
Reviewed-by: Petr Tesarik <petr@tesarici.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2024-07-10 07:59:03 +02:00
events xen: branch for v6.9-rc1 2024-03-19 08:48:09 -07:00
xen-pciback
xenbus drivers/xen: Improve the late XenStore init protocol 2024-05-23 12:41:18 +02:00
xenfs
acpi.c
arm-device.c
balloon.c x86/xen: attempt to inflate the memory balloon on PVH 2024-03-13 17:48:26 +01:00
biomerge.c
cpu_hotplug.c
dbgp.c
efi.c
evtchn.c xen/evtchn: avoid WARN() when unbinding an event channel 2024-03-17 18:57:11 +01:00
features.c
gntalloc.c xen/gntalloc: Replace UAPI 1-element array 2024-02-13 09:06:48 +01:00
gntdev-common.h
gntdev-dmabuf.c xen/gntdev: Fix the abuse of underlying struct page in DMA-buf import 2024-01-10 07:32:40 +01:00
gntdev-dmabuf.h
gntdev.c
grant-dma-iommu.c xen/grant-dma-iommu: Convert to platform remove callback returning void 2024-03-13 17:09:26 +01:00
grant-dma-ops.c change alloc_pages name in dma_map_ops to avoid name conflicts 2024-04-25 20:55:53 -07:00
grant-table.c
Kconfig
Makefile
manage.c
mcelog.c
mem-reservation.c
pci.c
pcpu.c xen: pcpu: make xen_pcpu_subsys const 2024-02-13 09:03:31 +01:00
platform-pci.c
privcmd-buf.c
privcmd.c xen/privcmd: Use memdup_array_user() in alloc_ioreq() 2024-02-13 08:58:21 +01:00
privcmd.h
pvcalls-back.c net: change proto and proto_ops accept type 2024-05-13 18:19:09 -06:00
pvcalls-front.c
pvcalls-front.h
swiotlb-xen.c swiotlb: reduce swiotlb pool lookups 2024-07-10 07:59:03 +02:00
sys-hypervisor.c
time.c
unpopulated-alloc.c
xen-acpi-pad.c
xen-acpi-processor.c
xen-balloon.c xen: balloon: make balloon_subsys const 2024-02-13 09:03:34 +01:00
xen-front-pgdir-shbuf.c
xen-scsiback.c
xlate_mmu.c