1549 Commits

Author SHA1 Message Date
Ingo Molnar
1fb48f8e54 Linux 4.6-rc6
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJXJoi6AAoJEHm+PkMAQRiGYKIIAIcocIV48DpGAHXFuSZbzw5D
 rp9EbE5TormtddPz1J1zcqu9tl5H8tfxS+LvHqRaDXqQkbb0BWKttmEpKTm9mrH8
 kfGNW8uwrEgTMMsar54BypAMMhHz4ITsj3VQX5QLSC5j6wixMcOmQ+IqH0Bwt3wr
 Y5JXDtZRysI1GoMkSU7/qsQBjC7aaBa5VzVUiGvhV8DdvPVFQf73P89G1vzMKqb5
 HRWbH4ieu6/mclLvW9N2QKGMHQntlB+9m2kG9nVWWbBSDxpAotwqQZFh3D52MBUy
 6DH/PNgkVyDhX4vfjua0NrmXdwTfKxLWGxe4dZ8Z+JZP5c6pqWlClIPBCkjHj50=
 =KLSM
 -----END PGP SIGNATURE-----

Merge tag 'v4.6-rc6' into x86/asm, to refresh the tree

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-05-05 08:35:00 +02:00
Peng Fan
3ca3712a42 iommu/arm-smmu: Clear cache lock bit of ACR
According to the MMU-500 r2 TRM, section 3.7.1 "Auxiliary Control registers",
the ACTLR can only be modified when the ACR.CACHE_LOCK bit is 0.

So, before clearing ARM_MMU500_ACTLR_CPRE in each context bank, the
CACHE_LOCK bit of the ACR register needs to be cleared first.

Since the CACHE_LOCK bit is only present from MMU-500 r2 onwards, the
major number in IDR7 needs to be checked.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Peng Fan <van.freenix@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:04 +01:00
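
A minimal sketch of the sequence described in the commit above, assuming register offsets and bit macros along the lines of the driver's existing ARM_SMMU_GR0_* definitions (illustrative only, not the literal patch):

    /* Sketch: on MMU-500 r2 and later, clear ACR.CACHE_LOCK before the
     * per-context ACTLR registers are modified. Macro names are assumed. */
    if (smmu->model == ARM_MMU500) {
        u32 major = (readl_relaxed(gr0_base + ARM_SMMU_GR0_ID7) >> 4) & 0xf;
        u32 acr = readl_relaxed(gr0_base + ARM_SMMU_GR0_sACR);

        if (major >= 2)
            acr &= ~ARM_MMU500_ACR_CACHE_LOCK;
        writel_relaxed(acr, gr0_base + ARM_SMMU_GR0_sACR);
    }
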
Robin Murphy
b7862e3559 iommu/arm-smmu: Support SMMUv1 64KB supplement
The 64KB Translation Granule Supplement to the SMMUv1 architecture
allows an SMMUv1 implementation to support 64KB pages for stage 2
translations, using a constrained VMSAv8 descriptor format limited
to 40-bit addresses. Now that we can freely mix and match context
formats, we can actually handle having 4KB pages via an AArch32
context but 64KB pages via an AArch64 context, so plumb it in.

It is assumed that any implementations will have hardware capabilities
matching the format constraints, thus obviating the need for excessive
sanity-checking; this is the case for MMU-401, the only ARM Ltd.
implementation.

CC: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:03 +01:00
Robin Murphy
7602b87106 iommu/arm-smmu: Decouple context format from kernel config
The way the driver currently forces an AArch32 or AArch64 context format
based on the kernel config and SMMU architecture version is suboptimal,
in that it makes it very hard to support oddball mix-and-match cases
like the SMMUv1 64KB supplement, or situations where the reduced table
depth of an AArch32 short descriptor context may be desirable under an
AArch64 kernel. It also only happens to work on current implementations
which do support all the relevant formats.

Introduce an explicit notion of context format, so we can manage that
independently and get rid of the inflexible #ifdeffery.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:03 +01:00
Robin Murphy
f9a05f05b1 iommu/arm-smmu: Tidy up 64-bit/atomic I/O accesses
With {read,write}q_relaxed now able to fall back to the common
nonatomic-hi-lo helper, make use of that so that we don't have to
open-code our own. In the process, also convert the other remaining
split accesses, and repurpose the custom accessor to smooth out the
couple of troublesome instances where we really want to avoid
nonatomic writes (and a 64-bit access is unnecessary in the 32-bit
context formats we would use on a 32-bit CPU).

This paves the way for getting rid of some of the assumptions currently
baked into the driver which make it really awkward to use 32-bit context
formats with SMMUv2 under a 64-bit kernel.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:03 +01:00
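
For reference, the hi-lo fallback that {read,write}q_relaxed can now resolve to is essentially two 32-bit accesses, high word first; a sketch along the lines of the generic io-64-nonatomic helpers:

    /* Not atomic: a concurrent observer may see a torn value, which is why
     * the driver keeps a custom accessor for the sensitive cases. */
    static inline void hi_lo_writeq_relaxed(u64 val, volatile void __iomem *addr)
    {
        writel_relaxed(val >> 32, addr + 4);
        writel_relaxed(val, addr);
    }

    static inline u64 hi_lo_readq_relaxed(const volatile void __iomem *addr)
    {
        u32 high = readl_relaxed(addr + 4);
        u32 low = readl_relaxed(addr);

        return ((u64)high << 32) | low;
    }
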
Robin Murphy
f0cfffc48c iommu/arm-smmu: Work around MMU-500 prefetch errata
MMU-500 erratum #841119 is tickled by a particular set of circumstances
interacting with the next-page prefetcher. Since said prefetcher is
quite dumb and actually detrimental to performance in some cases (by
causing unwanted TLB evictions for non-sequential access patterns), we
lose very little by turning it off, and what we gain is a guarantee that
the erratum is never hit.

As a bonus, the same workaround will also prevent erratum #826419 once
v7 short descriptor support is implemented.

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:02 +01:00
Robin Murphy
e086d912d4 iommu/arm-smmu: Convert ThunderX workaround to new method
With a framework for implementation-specific functionality in place, the
currently-FDT-dependent ThunderX workaround gets to be the first user.

Acked-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:02 +01:00
Robin Murphy
67b65a3fb8 iommu/arm-smmu: Differentiate specific implementations
As the inevitable reality of implementation-specific errata workarounds
begins to accrue alongside our integration quirk handling, it's about
time the driver had a decent way of keeping track. Extend the per-SMMU
data so we can identify specific implementations in an efficient and
firmware-agnostic manner.

Acked-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:01 +01:00
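
One plausible shape for that per-SMMU data (a sketch; field and enumerator names are assumptions, not quoted from the patch):

    /* Sketch: record the specific implementation once at probe time so
     * errata workarounds can key off a simple enum rather than firmware
     * properties. */
    enum arm_smmu_implementation {
        GENERIC_SMMU,
        ARM_MMU500,
        CAVIUM_SMMUV2,
    };

    struct arm_smmu_device {
        /* ... existing fields ... */
        enum arm_smmu_arch_version version;
        enum arm_smmu_implementation model;
    };
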
Tirumalesh Chalamarla
1bd37a6835 iommu/arm-smmu: Workaround for ThunderX erratum #27704
Due to erratum #27704, the CN88xx SMMUv2 implementation supports only
shared ASID and VMID numberspaces.

This patch ensures that ASID and VMIDs are unique across all SMMU
instances on affected Cavium systems.

Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
Signed-off-by: Akula Geethasowjanya <Geethasowjanya.Akula@caviumnetworks.com>
[will: commit message, comments and formatting]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:01 +01:00
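
The essence of the workaround, as a hedged sketch (field names assumed): carve each Cavium SMMU's ASID/VMID range out of one global counter so values never collide across instances.

    /* Sketch: reserve a disjoint slice of the shared numberspace for every
     * CN88xx SMMU instance. */
    static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0);

    static void cavium_reserve_context_ids(struct arm_smmu_device *smmu)
    {
        if (smmu->model != CAVIUM_SMMUV2)
            return;

        smmu->cavium_id_base =
            atomic_add_return(smmu->num_context_banks,
                              &cavium_smmu_context_count);
        smmu->cavium_id_base -= smmu->num_context_banks;
    }
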
Tirumalesh Chalamarla
4e3e9b6997 iommu/arm-smmu: Add support for 16 bit VMID
This patch adds support for 16-bit VMIDs on implementations of SMMUv2
that support it.

Signed-off-by: Tirumalesh Chalamarla <tchalamarla@caviumnetworks.com>
[will: commit message and comments]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-05-03 18:23:00 +01:00
Joerg Roedel
fd6c50ee3a iommu/amd: Move get_device_id() and friends to beginning of file
They will be needed there later.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-21 18:24:02 +02:00
Joerg Roedel
9ee35e4c6f iommu/amd: Don't use IS_ERR_VALUE to check integer values
Use the better 'var < 0' check.

Fixes: 7aba6cb9ee9d ('iommu/amd: Make call-sites of get_device_id aware of its return value')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-21 18:21:31 +02:00
Robin Murphy
9800699c64 iommu/arm-smmu: Don't allocate resources for bypass domains
Until we get fully plumbed into of_iommu_configure, our default
IOMMU_DOMAIN_DMA domains just bypass translation. Since we achieve that
by leaving the stream table entries set to bypass instead of pointing at
a translation context, the context bank we allocate for the domain is
completely wasted. Context banks are typically a rather limited
resource, so don't hog ones we don't need.

Reported-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-21 16:47:32 +02:00
Will Deacon
5f634956cc iommu/arm-smmu: Fix stream-match conflict with IOMMU_DOMAIN_DMA
Commit cbf8277ef456 ("iommu/arm-smmu: Treat IOMMU_DOMAIN_DMA as bypass
for now") ignores requests to attach a device to the default domain
since, without IOMMU-backed DMA ops available everywhere, the default
domain will just lead to unexpected transaction faults being reported.

Unfortunately, the way this was implemented on SMMUv2 causes a
regression with VFIO PCI device passthrough under KVM on AMD Seattle.
On this system, the host controller device is associated with both a
pci_dev *and* a platform_device, and can therefore end up with duplicate
SMR entries, resulting in a stream-match conflict at runtime.

This patch amends the original fix so that attaching to IOMMU_DOMAIN_DMA
is rejected even before configuring the SMRs. This restores the old
behaviour for now, but we'll need to look at handling host controllers
specially when we come to supporting the default domain fully.

Reported-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Eric Auger <eric.auger@linaro.org>
Tested-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-21 16:47:32 +02:00
Omer Peleg
22e2f9fa63 iommu/vt-d: Use per-cpu IOVA caching
Commit 9257b4a2 ('iommu/iova: introduce per-cpu caching to iova allocation')
introduced per-CPU IOVA caches to massively improve scalability. Use them.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
[dwmw2: split out VT-d part into a separate patch]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:44:48 -04:00
Omer Peleg
9257b4a206 iommu/iova: introduce per-cpu caching to iova allocation
IOVA allocation has two problems that impede high-throughput I/O.
First, it can do a linear search over the allocated IOVA ranges.
Second, the rbtree spinlock that serializes IOVA allocations becomes
contended.

Address these problems by creating an API for caching allocated IOVA
ranges, so that the IOVA allocator isn't accessed frequently.  This
patch adds a per-CPU cache, from which CPUs can alloc/free IOVAs
without taking the rbtree spinlock.  The per-CPU caches are backed by
a global cache, to avoid invoking the (linear-time) IOVA allocator
without needing to make the per-CPU cache size excessive.  This design
is based on magazines, as described in "Magazines and Vmem: Extending
the Slab Allocator to Many CPUs and Arbitrary Resources" (currently
available at https://www.usenix.org/legacy/event/usenix01/bonwick.html).

Adding caching on top of the existing rbtree allocator maintains the
property that IOVAs are densely packed in the IO virtual address space,
which is important for keeping IOMMU page table usage low.

To keep the cache size reasonable, we bound the IOVA space a CPU can
cache by 32 MiB (we cache a bounded number of IOVA ranges, and only
ranges of size <= 128 KiB).  The shared global cache is bounded at
4 MiB of IOVA space.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
[dwmw2: split out VT-d part into a separate patch]
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:42:24 -04:00
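
To make the magazine scheme concrete, here is a compressed sketch of the data structures such a cache builds on (sizes and field names are illustrative assumptions):

    /* Sketch: each CPU holds two magazines of recently freed IOVA pfns;
     * full magazines overflow into a small global depot instead of going
     * straight back to the rbtree allocator. */
    #define IOVA_MAG_SIZE   128
    #define MAX_GLOBAL_MAGS 32

    struct iova_magazine {
        unsigned long size;                 /* entries currently stored */
        unsigned long pfns[IOVA_MAG_SIZE];  /* cached IOVA pfns */
    };

    struct iova_cpu_rcache {
        spinlock_t lock;
        struct iova_magazine *loaded;       /* magazine allocations come from */
        struct iova_magazine *prev;         /* swapped in when 'loaded' empties */
    };

    struct iova_rcache {
        spinlock_t lock;
        unsigned long depot_size;
        struct iova_magazine *depot[MAX_GLOBAL_MAGS];
        struct iova_cpu_rcache __percpu *cpu_rcaches;
    };

An allocation first tries the loaded magazine, then the previous one, then pulls a full magazine from the depot; only when all of those fail does it fall back to the rbtree allocator.
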
Omer Peleg
2aac630429 iommu/vt-d: change intel-iommu to use IOVA frame numbers
Make intel-iommu map/unmap/invalidate work with IOVA pfns instead of
pointers to "struct iova". This avoids using the iova struct from the IOVA
red-black tree and the resulting explicit find_iova() on unmap.

This patch will allow us to cache IOVAs in the next patch, in order to
avoid rbtree operations for the majority of map/unmap operations.

Note: In eliminating the find_iova() operation, we have also eliminated
the sanity check previously done in the unmap flow. Arguably, this was
overhead that is better avoided in production code, but it could be
brought back as a debug option for driver development.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, fixed to not break iova api, and reworded
 the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:07:22 -04:00
Omer Peleg
0824c5920b iommu/vt-d: avoid dev iotlb logic for domains with no dev iotlbs
This patch avoids taking the device_domain_lock in iommu_flush_dev_iotlb()
for domains with no dev iotlb devices.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[gvdl@google.com: fixed locking issues]
Signed-off-by: Godfrey van der Linden <gvdl@google.com>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:06:15 -04:00
Omer Peleg
769530e4ba iommu/vt-d: only unmap mapped entries
Current unmap implementation unmaps the entire area covered by the IOVA
range, which is a power-of-2 aligned region. The corresponding map,
however, only maps those pages originally mapped by the user. This
discrepancy can lead to unmapping of already unmapped entries, which is
unneeded work.

With this patch, only mapped pages are unmapped. This is also a baseline
for a map/unmap implementation based on IOVAs and not iova structures,
which will allow caching.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:06:01 -04:00
Omer Peleg
f5c0c08b1e iommu/vt-d: correct flush_unmaps pfn usage
Change flush_unmaps() to correctly pass iommu_flush_iotlb_psi()
dma addresses.  (x86_64 mm and dma have the same size for pages
at the moment, but this usage improves consistency.)

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:56 -04:00
Omer Peleg
aa4732406e iommu/vt-d: per-cpu deferred invalidation queues
The IOMMU's IOTLB invalidation is a costly process.  When iommu mode
is not set to "strict", it is done asynchronously. Current code
amortizes the cost of invalidating IOTLB entries by batching all the
invalidations in the system and performing a single global invalidation
instead. The code queues pending invalidations in a global queue that
is accessed under the global "async_umap_flush_lock" spinlock, which
can result in significant spinlock contention.

This patch splits this deferred queue into multiple per-cpu deferred
queues, and thus gets rid of the "async_umap_flush_lock" and its
contention.  To keep existing deferred invalidation behavior, it still
invalidates the pending invalidations of all CPUs whenever a CPU
reaches its watermark or a timeout occurs.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased, cleaned up and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:24 -04:00
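
A simplified sketch of the per-CPU arrangement described above (names assumed; the real entries carry more state):

    /* Sketch: each CPU batches its own pending IOTLB invalidations; when a
     * CPU's table hits the watermark (or a timer fires), the pending
     * entries of all CPUs are flushed, preserving the old semantics. */
    #define HIGH_WATER_MARK 250

    struct deferred_flush_entry {
        unsigned long iova_pfn;
        unsigned long nrpages;
        struct dmar_domain *domain;
        struct page *freelist;
    };

    struct deferred_flush_table {
        int next;                   /* entries queued so far */
        struct deferred_flush_entry entries[HIGH_WATER_MARK];
    };

    struct deferred_flush_data {
        spinlock_t lock;            /* protects this CPU's queue only */
        int timer_on;
        struct timer_list timer;
        unsigned long size;
        struct deferred_flush_table *tables;
    };

    static DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
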
Omer Peleg
314f1dc140 iommu/vt-d: refactoring of deferred flush entries
Currently, information about deferred flushes is spread across several
lists in the flush tables. Instead, move all information about a specific
flush into a single entry in this table.

This patch does not introduce any functional change.

Signed-off-by: Omer Peleg <omer@cs.technion.ac.il>
[mad@cs.technion.ac.il: rebased and reworded the commit message]
Signed-off-by: Adam Morrison <mad@cs.technion.ac.il>
Reviewed-by: Shaohua Li <shli@fb.com>
Reviewed-by: Ben Serebrin <serebrin@google.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2016-04-20 15:05:20 -04:00
Joerg Roedel
cb6c27bb09 iommu/arm-smmu: Make use of phandle iterators in device-tree parsing
Remove the usage of of_parse_phandle_with_args() and replace
it with the phandle-iterator implementation, so that we can
parse out all of the up to 128 stream IDs that may be present.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
2016-04-19 17:25:15 -05:00
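
For illustration, the iterator-based parse loops over every "mmu-masters" phandle roughly like this (a sketch; register_smmu_master() and MAX_MASTER_STREAMIDS stand in for the driver's own helpers):

    struct of_phandle_iterator it;
    struct of_phandle_args masterspec;
    int err;

    /* Sketch: visit each phandle entry and copy out however many
     * stream IDs it carries, up to the driver's limit. */
    of_for_each_phandle(&it, err, dev->of_node,
                        "mmu-masters", "#stream-id-cells", 0) {
        int count = of_phandle_iterator_args(&it, masterspec.args,
                                             MAX_MASTER_STREAMIDS);

        masterspec.np = of_node_get(it.node);
        masterspec.args_count = count;
        err = register_smmu_master(smmu, dev, &masterspec);
        if (err)
            break;
    }
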
Ingo Molnar
6666ea558b Linux 4.6-rc4
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJXFELfAAoJEHm+PkMAQRiGRYIH+wWsUva7TR9arN1ZrURvI17b
 KqyQH8Ov9zJBsIaq/rFXOr5KfNgx7BU9BL9h7QkBy693HXTWf+GTZ1czHM4N12C3
 0ZdHGrLwTHo2zdisiQaFORZSfhSVTUNGXGHXw13bUMgEqatPgkozXEnsvXXNdt1Z
 HtlcuJn3pcj+QIY7qDXZgTLTwgn248hi1AgNag+ntFcWiz21IYaMIi7/mCY9QUIi
 AY+Y3hqFQM7/8cVyThGS5wZPTg1YzdhsLJpoCk0TbS8FvMEnA+ylcTgc15C78bwu
 AxOwM3OCmH4gMsd7Dd/O+i9lE3K6PFrgzdDisYL3P7eHap+EdiLDvVzPDPPx0xg=
 =Q7r3
 -----END PGP SIGNATURE-----

Merge tag 'v4.6-rc4' into x86/asm, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-04-19 10:38:52 +02:00
Dan Carpenter
2d8e1f039d iommu/amd: Signedness bug in acpihid_device_group()
"devid" needs to be signed for the error handling to work.

Fixes: b097d11a0fa3f ('iommu/amd: Manage iommu_group for ACPI HID devices')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-15 12:09:10 +02:00
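
The pattern behind the fix, in miniature (a hedged example rather than the driver code): with an unsigned local variable, the negative-errno check is silently dead.

    /* Broken: a negative errno from get_device_id() wraps to a large
     * positive value, so the error branch can never be taken. */
    u16 devid = get_device_id(dev);
    if (devid < 0)                  /* always false for an unsigned type */
        return ERR_PTR(-EINVAL);

    /* Fixed: keep the value signed until the error has been checked. */
    int devid = get_device_id(dev);
    if (devid < 0)
        return ERR_PTR(devid);
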
Borislav Petkov
93984fbd4e x86/cpufeature: Replace cpu_has_apic with boot_cpu_has() usage
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: iommu@lists.linux-foundation.org
Cc: linux-pm@vger.kernel.org
Cc: oprofile-list@lists.sf.net
Link: http://lkml.kernel.org/r/1459801503-15600-8-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-04-13 11:37:41 +02:00
Jacek Lawrynowicz
338c3149a2 PCI: Add support for multiple DMA aliases
Solve IOMMU support issues with PCIe non-transparent bridges that use
Requester ID look-up tables (RID-LUT), e.g., the PEX8733.

The NTB connects devices in two independent PCI domains.  Devices separated
by the NTB are not able to discover each other.  A PCI packet being
forwarded from one domain to another has to have its RID modified so it
appears on the correct bus and completions are forwarded back to the original
domain through the NTB.  The RID is translated using a preprogrammed table
(LUT) and the PCI packet propagates upstream away from the NTB.  If the
destination system has IOMMU enabled, the packet will be discarded because
the new RID is unknown to the IOMMU.  Adding a DMA alias for the new RID
allows the IOMMU to properly recognize the packet.

Each device behind the NTB has a unique RID assigned in the RID-LUT.  The
current DMA alias implementation supports only a single alias, so it's not
possible to support multiple devices behind the NTB when the IOMMU is enabled.

Enable all 256 possible aliases on a given bus, stored in a bitset. The
alias devfn is translated directly to a bit number. The bitset is not
allocated for devices that have no need for DMA aliases.

More details can be found in the following article:
http://www.plxtech.com/files/pdf/technical/expresslane/RTC_Enabling%20MulitHostSystemDesigns.pdf

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: Joerg Roedel <jroedel@suse.de>
2016-04-11 14:34:32 -05:00
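
In outline, recording an alias devfn with such a bitset looks like this (a hedged sketch; the real helper lives in the PCI core and its exact signature may differ):

    /* Sketch: one bit per possible devfn (256 per bus). An NTB RID-LUT
     * entry becomes a DMA alias by setting the matching bit; IOMMU drivers
     * then walk the set bits when programming their device tables. */
    static int pci_add_dma_alias_sketch(struct pci_dev *dev, u8 devfn)
    {
        if (!dev->dma_alias_mask)
            dev->dma_alias_mask = kcalloc(BITS_TO_LONGS(256),
                                          sizeof(long), GFP_KERNEL);
        if (!dev->dma_alias_mask)
            return -ENOMEM;

        set_bit(devfn, dev->dma_alias_mask);
        return 0;
    }
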
Joerg Roedel
e315604834 iommu/amd: Fix checking of pci dma aliases
Commit 61289cb ('iommu/amd: Remove old alias handling code')
removed the old alias handling code from the AMD IOMMU
driver because this is now handled by the IOMMU core code.

But this also removed the handling of PCI aliases, which is
not handled by the core code. This caused issues with PCI
devices that have hidden PCIe-to-PCI bridges that rewrite
the request-id.

Fix this bug by re-introducing some of the removed functions
from commit 61289cbaf6c8 and adding an 'alias' field to
'struct iommu_dev_data'. This field carries the return value
of the get_alias() function, which is then used instead of the
amd_iommu_alias_table[] array in the code.

Fixes: 61289cbaf6c8 ('iommu/amd: Remove old alias handling code')
Cc: stable@vger.kernel.org # v4.4+
Tested-by: Tomasz Golinski <tomaszg@math.uwb.edu.pl>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-11 16:07:51 +02:00
Robin Murphy
e88ccab12a iommu/io-pgtable-arm-v7s: Support IOMMU_MMIO flag
Teach the short-descriptor format to create Device mappings when asked.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 15:07:50 +02:00
Robin Murphy
fb948251e4 iommu/io-pgtable-arm: Support IOMMU_MMIO flag
Teach the LPAE format to create Device mappings when asked.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 15:07:50 +02:00
Dan Carpenter
0b74ecdfbe iommu/vt-d: Silence an uninitialized variable warning
My static checker complains that "dma_alias" is uninitialized unless we
are dealing with a pci device.  This is true but harmless.  Anyway, we
can flip the condition around to silence the warning.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 14:51:47 +02:00
John Keeping
fbedd9b990 iommu/rockchip: Fix "is stall active" check
Since commit cd6438c5f844 ("iommu/rockchip: Reconstruct to support multi
slaves") rk_iommu_is_stall_active() always returns false because the
bitwise AND operates on the boolean flag promoted to an integer and a
value that is either zero or BIT(2).

Explicitly convert the right-hand value to a boolean so that both sides
are guaranteed to be either zero or one.

rk_iommu_is_paging_enabled() does not suffer from the same problem since
RK_MMU_STATUS_PAGING_ENABLED is BIT(0), but let's apply the same change
for consistency and to make it clear that it's correct without needing
to look up the value.

Fixes: cd6438c5f844 ("iommu/rockchip: Reconstruct to support multi slaves")
Signed-off-by: John Keeping <john@metanate.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Tested-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 14:50:18 +02:00
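
The shape of the bug and of the fix, reduced to a sketch (identifiers follow the driver's naming, but this is illustrative only):

    /* Broken: 'active' is a bool (0 or 1) and the status bit is BIT(2),
     * so the bitwise AND below is always zero. */
    active &= rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) &
              RK_MMU_STATUS_STALL_ACTIVE;

    /* Fixed: force the register test to 0/1 before combining it with
     * the boolean accumulator. */
    active &= !!(rk_iommu_read(iommu->bases[i], RK_MMU_STATUS) &
                 RK_MMU_STATUS_STALL_ACTIVE);
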
Andy Fleming
a0d284d2b1 powerpc: Fix incorrect PPC32 PAMU dependency
The Freescale PAMU can be enabled on both 32- and 64-bit
Power chips. Commit 477ab7a19ce restricted PAMU to PPC32;
depending on PPC covers both.

Signed-off-by: Andy Fleming <afleming@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 14:45:59 +02:00
Joerg Roedel
eebb8034a5 iommu: Don't overwrite domain pointer when there is no default_domain
IOMMU drivers that do not support default domains, but make
use of the group->domain pointer can get that pointer
overwritten with NULL on device add/remove.

Make sure this can't happen by only overwriting the domain
pointer when it is NULL.

Cc: stable@vger.kernel.org # v4.4+
Fixes: 1228236de5f9 ('iommu: Move default domain allocation to iommu_group_get_for_dev()')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 14:33:03 +02:00
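
The fix boils down to a guarded assignment on the device add/remove path, along these lines (sketch):

    /* Sketch: only install the default domain when the group does not
     * already carry a driver-managed domain pointer. */
    if (!group->domain)
        group->domain = group->default_domain;
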
Wan Zongshun
9a4d3bf56c iommu/amd: Set AMD iommu callbacks for amba bus
The AMD UART DMA controller is an ACPI HID device whose driver is
based on the AMBA bus, so it also needs IOMMU support.

This patch simply sets the AMD IOMMU callbacks for the AMBA bus.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:42 +02:00
Wan Zongshun
b097d11a0f iommu/amd: Manage iommu_group for ACPI HID devices
This patch creates a new function for finding or creating an IOMMU
group for acpihid(ACPI Hardware ID) device.

Acpihid devices with the same devid will be put into the same group;
they will have the same domain ID and share the same page table.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:42 +02:00
Wan Zongshun
2bf9a0a127 iommu/amd: Add iommu support for ACPI HID devices
The current IOMMU driver assumes that the downstream devices are PCI.
With the newly added ACPI-HID IVHD device entry support, this is no
longer true. This patch adds a device type check to distinguish the
PCI and acpihid device code paths.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:42 +02:00
Wan Zongshun
7aba6cb9ee iommu/amd: Make call-sites of get_device_id aware of its return value
This patch makes the call-sites of get_device_id aware of its
return value.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:41 +02:00
Suravee Suthikulpanit
ca3bf5d47c iommu/amd: Introduces ivrs_acpihid kernel parameter
This patch introduces a new kernel parameter, ivrs_acpihid.
This is used to override an existing ACPI-HID IVHD device entry,
or to add an entry in case it is missing from the IVHD.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:41 +02:00
Wan Zongshun
2a0cb4e2d4 iommu/amd: Add new map for storing IVHD dev entry type HID
This patch introduces acpihid_map, which is used to store
the new IVHD device entry extracted from BIOS IVRS table.

It also provides a utility function, add_acpi_hid_device(),
to add these types of devices to the map.

Signed-off-by: Wan Zongshun <Vincent.Wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:41 +02:00
Suravee Suthikulpanit
8c7142f56f iommu/amd: Use the most comprehensive IVHD type that the driver can support
The IVRS in more recent AMD systems usually contains multiple
IVHD block types (e.g. 0x10, 0x11, and 0x40) for each IOMMU.
The newer IVHD types provide more information (e.g. new features
specified in the IOMMU spec), while maintaining compatibility with
the older IVHD type.

Having multiple IVHD types allows older IOMMU drivers to still function
(e.g. using the older IVHD type 0x10), while a newer IOMMU driver can use
the newer IVHD types (e.g. 0x11 and 0x40). Therefore, the IOMMU driver
should only make use of the newest IVHD type that it can support.

This patch adds new logic to determine the highest IVHD type the driver
can support, and uses it throughout driver initialization. This requires
adding another pass to the IVRS parsing to determine the appropriate
IVHD type (see function get_highest_supported_ivhd_type()) before
parsing the contents.

[Vincent: fix the build error of IVHD_DEV_ACPI_HID flag not found]

Signed-off-by: Wan Zongshun <vincent.wan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:41 +02:00
Suravee Suthikulpanit
ac7ccf6765 iommu/amd: Modify ivhd_header structure to support type 11h and 40h
This patch modifies the existing struct ivhd_header,
which currently only supports IVHD type 0x10, to add
new fields from IVHD types 11h and 40h.

It also modifies the pointer calculation to allow
support for IVHD types 11h and 40h.

Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:41 +02:00
Suravee Suthikulpanit
7d7d38afb3 iommu/amd: Adding Extended Feature Register check for PC support
IVHD header types 11h and 40h introduce the PCSup bit in
the EFR Register Image bit fields. This should be used to
determine IOMMU performance counter support instead of
relying on the PNCounters and PNBanks bits.

Note also that the PNCounters and PNBanks bits in the IOMMU
attributes field of IVHD headers of type 11h are incorrectly
programmed on some systems.

So, we should not rely on them to determine the performance
counter/bank sizes. Instead, these values should be read
from the IOMMU Extended Feature Register at MMIO offset 0030h.

Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-07 13:29:40 +02:00
Suman Anna
a5c0e0b4ac iommu/omap: Align code with open parenthesis
This patch fixes one existing alignment checkpatch check
warning of the type "Alignment should match open parenthesis"
in the OMAP IOMMU debug source file.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:53:20 +02:00
Suman Anna
433c434a26 iommu/omap: Use WARN_ON for page table alignment check
The OMAP IOMMU page table needs to be aligned on a 16K boundary,
and the current code uses a BUG_ON on the alignment sanity check
in the .domain_alloc() ops implementation. Replace this with a
less severe WARN_ON and bail out gracefully.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:53:20 +02:00
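
Concretely, the alignment check in .domain_alloc() becomes something like this (a hedged sketch; the constant and field names are assumptions):

    /* Sketch: warn and bail out of domain allocation instead of crashing
     * when the freshly allocated page table is not 16K-aligned. */
    if (WARN_ON(!IS_ALIGNED((unsigned long)omap_domain->pgtable, SZ_16K))) {
        kfree(omap_domain->pgtable);
        kfree(omap_domain);
        return NULL;
    }
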
Suman Anna
7c1ab60008 iommu/omap: Replace BUG() in iopgtable_store_entry_core()
The iopgtable_store_entry_core() function uses a BUG() statement
for an unsupported page size entry programming. Replace this with
a less severe WARN_ON() and perform a graceful bailout on error.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:53:20 +02:00
Suman Anna
521f40823e iommu/omap: Remove iopgtable_clear_entry_all() from driver remove
The function iopgtable_clear_entry_all() is used for clearing all
the page table entries. These entries are neither created nor
initialized during the OMAP IOMMU driver probe, and are managed
only when a client device attaches to the IOMMU. So, there is no
need to invoke this function on a driver remove.

Removing this fixes a NULL pointer dereference crash if the IOMMU
device is unbound from the driver with no client device attached
to the IOMMU device.

Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:53:19 +02:00
Michael S. Tsirkin
3d1a2442d2 x86/vt-d: Fix comment for dma_pte_free_pagetable()
dma_pte_free_pagetable() no longer depends on last-level PTEs
being clear; it clears them itself. Fix up the comment to
match.

Cc: Jiang Liu <jiang.liu@linux.intel.com>
Suggested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 17:00:37 +02:00
Tomeu Vizoso
8d7f2d84ed iommu/rockchip: Don't feed NULL res pointers to devres
If we do, devres prints an "invalid resource" string at the error
log level.

Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 16:27:07 +02:00
Alex Williamson
a0fe14d7dc iommu/vt-d: Improve fault handler error messages
Remove newlines in error logs; avoid duplicate and explicit pr_fmt.

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Fixes: 0ac2491f57af ('x86, dmar: move page fault handling code to dmar.c')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
2016-04-05 16:22:42 +02:00