Commit Graph

157402 Commits

Martin Schwidefsky
9de7d833e3 s390/tlb: Convert to generic mmu_gather
No change in behavior intended.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: aneesh.kumar@linux.vnet.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: linux@armlinux.org.uk
Cc: npiggin@gmail.com
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/20180918125151.31744-3-schwidefsky@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:57 +02:00
Martin Schwidefsky
952a31c9e6 asm-generic/tlb: Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y
Add the Kconfig option HAVE_MMU_GATHER_NO_GATHER to the generic
mmu_gather code. If the option is set, the mmu_gather no longer
tracks individual pages for delayed page freeing. A platform
that enables the option needs to provide its own implementation
of the __tlb_remove_page_size() function to free pages.
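
A minimal sketch of the kind of override such a platform might provide
(this is an illustration, not any architecture's actual code;
free_page_and_swap_cache() is assumed here as the immediate-free helper):

  /* No gathering: free each page right away instead of batching it. */
  bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
                              int page_size)
  {
          free_page_and_swap_cache(page);
          return false;   /* nothing is batched, so the gather never fills up */
  }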

No change in behavior intended.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: aneesh.kumar@linux.vnet.ibm.com
Cc: heiko.carstens@de.ibm.com
Cc: linux@armlinux.org.uk
Cc: npiggin@gmail.com
Link: http://lkml.kernel.org/r/20180918125151.31744-2-schwidefsky@de.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:55 +02:00
Peter Zijlstra
6137fed082 arch/tlb: Clean up simple architectures
For the architectures that do not implement their own tlb_flush() but
do already use the generic mmu_gather, there are two options:

 1) the platform has an efficient flush_tlb_range() and
    asm-generic/tlb.h doesn't need any overrides at all.

 2) the platform lacks an efficient flush_tlb_range() and
    we select MMU_GATHER_NO_RANGE to minimize full invalidates.

Convert all 'simple' architectures to one of these two forms; a rough
sketch of both forms follows the list below.

alpha:	    has no range invalidate -> 2
arc:	    already used flush_tlb_range() -> 1
c6x:	    has no range invalidate -> 2
hexagon:    has an efficient flush_tlb_range() -> 1
            (flush_tlb_mm() is in fact a full range invalidate,
	     so no need to shoot down everything)
m68k:	    has inefficient flush_tlb_range() -> 2
microblaze: has no flush_tlb_range() -> 2
mips:	    has efficient flush_tlb_range() -> 1
	    (even though it currently seems to use flush_tlb_mm())
nds32:	    already uses flush_tlb_range() -> 1
nios2:	    has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
openrisc:   has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
parisc:	    already uses flush_tlb_range() -> 1
sparc32:    already uses flush_tlb_range() -> 1
unicore32:  has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
xtensa:	    has efficient flush_tlb_range() -> 1
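
As a rough illustration of the two forms, for a hypothetical
arch/foo/include/asm/tlb.h (not any real architecture's header):

  /* Form 1: flush_tlb_range() is efficient, take the generic code as-is. */
  #include <asm-generic/tlb.h>

  /*
   * Form 2: flush_tlb_range() is inefficient or missing. The architecture
   * instead selects MMU_GATHER_NO_RANGE in Kconfig, so the generic
   * tlb_flush() falls back to flush_tlb_mm() whenever anything was
   * gathered, and per-range tracking is compiled out.
   */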

Note this also fixes a bug in the existing code for a number of
platforms. Those platforms that did:

  tlb_end_vma() -> if (!full_mm) flush_tlb_*()
  tlb_flush -> if (full_mm) flush_tlb_mm()

missed the case of shift_arg_pages(), which doesn't have @fullmm set,
nor calls into tlb_*vma(), but still frees page-tables and thus needs
an invalidate. The new code handles this by detecting a non-empty
range, and either issuing the matching range invalidate or a full
invalidate, depending on the capabilities.

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:54 +02:00
Peter Zijlstra
7bb8709d6a um/tlb: Convert to generic mmu_gather
Generic mmu_gather provides the simple flush_tlb_range() based range
tracking mmu_gather UM needs.

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:52 +02:00
Peter Zijlstra
c5b27a889d sh/tlb: Convert SH to generic mmu_gather
Generic mmu_gather provides everything SH needs (range tracking and
cache coherency).

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:51 +02:00
Peter Zijlstra
e154700774 ia64/tlb: Convert to generic mmu_gather
Generic mmu_gather provides everything ia64 needs (range tracking).

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:49 +02:00
Peter Zijlstra
b78180b97d arm/tlb: Convert to generic mmu_gather
Generic mmu_gather provides everything that ARM needs:

 - range tracking
 - RCU table free
 - VM_EXEC tracking
 - VIPT cache flushing

The one notable curiosity is the 'funny' range tracking for classical
ARM in __pte_free_tlb().

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:48 +02:00
Peter Zijlstra
96bc9567cb asm-generic/tlb, arch: Invert CONFIG_HAVE_RCU_TABLE_INVALIDATE
Make issuing a TLB invalidate for page-table pages the normal case.

The reason is twofold:

 - too many invalidates is safer than too few,
 - most architectures use the linux page-tables natively
   and would thus require this.

Make it an opt-out, instead of an opt-in.

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:47 +02:00
Peter Zijlstra
5f307be18b asm-generic/tlb, arch: Provide generic tlb_flush() based on flush_tlb_range()
Provide a generic tlb_flush() implementation that relies on
flush_tlb_range(). This is a little awkward because flush_tlb_range()
assumes a VMA for range invalidation, but we no longer have one.

An audit of all flush_tlb_range() implementations shows that only vma->vm_mm
and vma->vm_flags are used, and of the latter only VM_EXEC (I-TLB
invalidates) and VM_HUGETLB (large TLB invalidate) are used.

Therefore, track VM_EXEC and VM_HUGETLB in two more bits, and create a
'fake' VMA.

This allows architectures that have a reasonably efficient
flush_tlb_range() to not require any additional effort.
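
Roughly, the generic implementation described above amounts to the
following sketch (an approximation of asm-generic/tlb.h, not a verbatim
copy; vma_exec/vma_huge are the two extra tracking bits):

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
          if (tlb->fullmm || tlb->need_flush_all) {
                  flush_tlb_mm(tlb->mm);
          } else if (tlb->end) {
                  /* The 'fake' VMA: only vm_mm and vm_flags are consumed. */
                  struct vm_area_struct vma = {
                          .vm_mm    = tlb->mm,
                          .vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
                                      (tlb->vma_huge ? VM_HUGETLB : 0),
                  };

                  flush_tlb_range(&vma, tlb->start, tlb->end);
          }
  }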

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:42 +02:00
Peter Zijlstra
e7fd28a706 asm-generic/tlb, arch: Provide generic VIPT cache flush
The one obvious thing SH and ARM want is a sensible default for
tlb_start_vma(). (also: https://lkml.org/lkml/2004/1/15/6 )

Avoid all VIPT architectures providing their own tlb_start_vma()
implementation and rely on architectures to provide a no-op
flush_cache_range() when it is not relevant.

This patch makes tlb_start_vma() default to flush_cache_range(), which
should be right and sufficient. The only exceptions that I found were
(oddly):

  - m68k-mmu
  - sparc64
  - unicore

Those architectures appear to have flush_cache_range(), but their
current tlb_start_vma() does not call it.
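
A sketch of what the generic default amounts to (an approximation of the
asm-generic/tlb.h version, not a verbatim copy):

  static inline void tlb_start_vma(struct mmu_gather *tlb,
                                   struct vm_area_struct *vma)
  {
          /* A full-mm teardown flushes everything anyway. */
          if (tlb->fullmm)
                  return;

          flush_cache_range(vma, vma->vm_start, vma->vm_end);
  }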

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Miller <davem@davemloft.net>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:41 +02:00
Peter Zijlstra
ed6a79352c asm-generic/tlb, arch: Provide CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
Move the mmu_gather::page_size things into the generic code instead of
keeping them as PowerPC-specific bits.

No change in behavior intended.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-04-03 10:32:40 +02:00
Linus Torvalds
63fc9c2348 A collection of x86 and ARM bugfixes, and some improvements to documentation.
On top of this, a cleanup of kvm_para.h headers, which were exported by
 some architectures even though they do not support KVM at all.  This is
 responsible for all the Kbuild changes in the diffstat.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJcoM5VAAoJEL/70l94x66DU3EH/A8sYdsfeqALWElm2Sy9TYas
 mntz+oTWsl3vDy8s8zp1ET2NpF7oBlBEMmCWhVEJaD+1qW3VpTRAseR3Zr9ML9xD
 k+BQM8SKv47o86ZN+y4XALl30Ckb3DXh/X1xsrV5hF6J3ofC+Ce2tF560l8C9ygC
 WyHDxwNHMWVA/6TyW3mhunzuVKgZ/JND9+0zlyY1LKmUQ0BQLle23gseIhhI0YDm
 B4VGIYU2Mf8jCH5Ir3N/rQ8pLdo8U7f5P/MMfgXQafksvUHJBg6B6vOhLJh94dLh
 J2wixYp1zlT0drBBkvJ0jPZ75skooWWj0o3otEA7GNk/hRj6MTllgfL5SajTHZg=
 =/A7u
 -----END PGP SIGNATURE-----

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull KVM fixes from Paolo Bonzini:
 "A collection of x86 and ARM bugfixes, and some improvements to
  documentation.

  On top of this, a cleanup of kvm_para.h headers, which were exported
  by some architectures even though they do not support KVM at all. This is
  responsible for all the Kbuild changes in the diffstat"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (28 commits)
  Documentation: kvm: clarify KVM_SET_USER_MEMORY_REGION
  KVM: doc: Document the life cycle of a VM and its resources
  KVM: selftests: complete IO before migrating guest state
  KVM: selftests: disable stack protector for all KVM tests
  KVM: selftests: explicitly disable PIE for tests
  KVM: selftests: assert on exit reason in CR4/cpuid sync test
  KVM: x86: update %rip after emulating IO
  x86/kvm/hyper-v: avoid spurious pending stimer on vCPU init
  kvm/x86: Move MSR_IA32_ARCH_CAPABILITIES to array emulated_msrs
  KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts
  kvm: don't redefine flags as something else
  kvm: mmu: Used range based flushing in slot_handle_level_range
  KVM: export <linux/kvm_para.h> and <asm/kvm_para.h> iif KVM is supported
  KVM: x86: remove check on nr_mmu_pages in kvm_arch_commit_memory_region()
  kvm: nVMX: Add a vmentry check for HOST_SYSENTER_ESP and HOST_SYSENTER_EIP fields
  KVM: SVM: Workaround errata#1096 (insn_len maybe zero on SMAP violation)
  KVM: Reject device ioctls from processes other than the VM's creator
  KVM: doc: Fix incorrect word ordering regarding supported use of APIs
  KVM: x86: fix handling of role.cr4_pae and rename it to 'gpte_size'
  KVM: nVMX: Do not inherit quadrant and invalid for the root shadow EPT
  ...
2019-03-31 08:55:59 -07:00
Linus Torvalds
915ee0da5e Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
 "A pile of x86 updates:

   - Prevent exceeding the valid physical address space in the /dev/mem
     limit checks.

   - Move all header content inside the header guard to prevent compile
     failures.

   - Fix the bogus __percpu annotation in this_cpu_has() which makes
     sparse very noisy.

   - Disable switch jump tables completely when retpolines are enabled.

   - Prevent leaking the trampoline address"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/realmode: Make set_real_mode_mem() static inline
  x86/cpufeature: Fix __percpu annotation in this_cpu_has()
  x86/mm: Don't exceed the valid physical address space
  x86/retpolines: Disable switch jump tables when retpolines are enabled
  x86/realmode: Don't leak the trampoline kernel address
  x86/boot: Fix incorrect ifdeffery scope
  x86/resctrl: Remove unused variable
2019-03-31 08:40:15 -07:00
Linus Torvalds
c29d85417c Merge branch 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull CPU hotplug fixes from Thomas Gleixner:
 "Two SMT/hotplug related fixes:

   - Prevent crash when HOTPLUG_CPU is disabled and the CPU bringup
     aborts. This is triggered with the 'nosmt' command line option, but
     can happen by any abort condition. As the real unplug code is not
     compiled in, prevent the fail by keeping the CPU in zombie state.

   - Enforce HOTPLUG_CPU for SMP on x86 to avoid the above situation
     completely. With 'nosmt' being a popular option it's required to
     unplug the half brought up sibling CPUs (due to the MCE wreckage)
     completely"

* 'smp-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/smp: Enforce CONFIG_HOTPLUG_CPU when SMP=y
  cpu/hotplug: Prevent crash when CPU bringup fails on CONFIG_HOTPLUG_CPU=n
2019-03-31 08:22:12 -07:00
Linus Torvalds
6536c5f2c8 powerpc fixes for 5.1 #4
Three non-regression fixes.
 
 Our optimised memcmp could read past the end of one of the buffers and
 potentially trigger a page fault leading to an oops.
 
 Some of our code to read energy management data on PowerVM had an endian bug
 leading to bogus results.
 
 When reporting a machine check exception we incorrectly reported TLB multihits
 as D-Cache multihits due to a missing entry in the array of causes.
 
 Thanks to:
   Chandan Rajendra, Gautham R. Shenoy, Mahesh Salgaonkar, Segher Boessenkool,
   Vaidyanathan Srinivasan.
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJcoJG4AAoJEFHr6jzI4aWAwTkP/02lEd3G9MTaLLJUsvPTBG1G
 lUKPzTNqoWLvcqdwDqsr4Cfftn/DQvgQRTDXzFZCDPdIhUizDSDKAw0vf49Aue4l
 T8rxOiD7O7eFezsbZ86XIKqsRerWmb44NzrE28zkgcW6LEIjJTO6xz7ne6Cd+Xfc
 SCji4PBHKSHsL5L3mOU769nm5YDjQDszePN8M6WuYAhW/l7xKbQqWUw6m1zNQf/2
 pyy+KOpy1dSANCYgORltSyL3k280G3q75RZFEpqZkI8Yz9vuPImZh41L3CeVo7PU
 ktg2t+vy36r1/BXisENPF9NUBqhxUROU3ji56N1hKOhiocm6BBETRx+e/N2cXakB
 erKljjF0PMGqjfHgS0L05ZIwqjzme+amMvFDIPmGTW98UVW4+YLViAGMPBtB/NPm
 k2uap4VLAiBOsaj4XFPsR7y9WPtUyt56JBkB06e3aftUa9D8rwBP9oxBCR9M+MJ0
 V4qGaRUF1TIeAUlngbqJ/MBUqwWw6kcoApq+JX0/kf2Wc/lNjXK1+VCXDHSL3qkh
 4+WhEWRCf8XC/uTBM+/2a1ULn6kd8hh7LLZpCTt5X3vI0wXf2wGTbejC01jfTcX3
 I+PR/w9bSlxv2FfsiQWnn49l0dV4ZrCgQzTZ4wfiaRFWxnwn3z6CemyOiXn1umu7
 NK2/Q/nnNIwqquh7nJo+
 =Ugv6
 -----END PGP SIGNATURE-----

Merge tag 'powerpc-5.1-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:
 "Three non-regression fixes.

   - Our optimised memcmp could read past the end of one of the buffers
     and potentially trigger a page fault leading to an oops.

   - Some of our code to read energy management data on PowerVM had an
     endian bug leading to bogus results.

   - When reporting a machine check exception we incorrectly reported
     TLB multihits as D-Cache multihits due to a missing entry in the
     array of causes.

  Thanks to: Chandan Rajendra, Gautham R. Shenoy, Mahesh Salgaonkar,
  Segher Boessenkool, Vaidyanathan Srinivasan"

* tag 'powerpc-5.1-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/pseries/mce: Fix misleading print for TLB mutlihit
  powerpc/pseries/energy: Use OF accessor functions to read ibm,drc-indexes
  powerpc/64: Fix memcmp reading past the end of src/dest
2019-03-31 07:44:13 -07:00
Linus Torvalds
f9007cc601 Use memblock_alloc() instead of memblock_alloc_low() in
request_standard_resources(), the latter being limited to the low 4G
 memory range on arm64.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE5RElWfyWxS+3PLO2a9axLQDIXvEFAlyeaiMACgkQa9axLQDI
 XvH/dQ/+OI/klueavPAEueHruTrMYlpILzbIkXwM9tEUt/+SNPdST0Cp7nac5Zz6
 EamlhJsQFO3hiULrRmdXBUvOrl281y2INgaYms2gTI8YPZje79q/Iu/xssBZnh02
 WNyHgQlMv9J4rUIogjiRBJJSgVCAMRAbZ1yL68p26Gpnqg/8o5pcFUlJ+f9V+IGo
 RhSQ0VPvfaMxtGGKIGjeI6A0RYf9hp3UTEpv6bStwBEVZdzABjQ/7y5CcU+9Wzwz
 N4q1IyxESuy5P4pwZs+dE0L/Jj/Xh0JMKvX0wziNxfaiqpOXlyJxDWuy01nq/DIq
 ltTsBrib/5KhaiV6MtA3rAQ5CkVAKYS40Ujtk5xeOYETLXnLCx75jwSukFd0nGwR
 OSypb1BLoEI9zNzWDDdeXDqy6QCZLDLX56SzAhxhykKnrs6yvmtP5dIznGdssMeT
 FHRbXWn5nEpLbYrM+v/kF3YuH1bCFe3l62LgBWfx7MW6IOxzYRtaLL8hcwAhVypY
 Ikkt1WQgWwsrcfkr3EtmRddWWXXGce0SlhA/7+BUZw6oYXbH+RR67/sxDuo+kv3y
 9n45T5LXK2bQ80amH01r1AqP0xSAvdq1dTzehhl8ciQqXdHQmI12/BGUzcBOAfE4
 zWnTAaWCEqBy8bHyuyD15rFxiB+R21yT6QV38xr886T+CT4yxbI=
 =c25Z
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fix from Catalin Marinas:
 "Use memblock_alloc() instead of memblock_alloc_low() in
  request_standard_resources(), the latter being limited to the low 4G
  memory range on arm64"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: replace memblock_alloc_low with memblock_alloc
2019-03-29 15:44:11 -07:00
Matteo Croce
f560bd19d2 x86/realmode: Make set_real_mode_mem() static inline
Remove the unused @size argument and move the function into a header file
so it can be inlined.
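
The end result is roughly the following sketch of the inline helper
(an approximation, placed in arch/x86/include/asm/realmode.h):

  static inline void set_real_mode_mem(phys_addr_t mem)
  {
          real_mode_header = (struct real_mode_header *) __va(mem);
  }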

 [ bp: Massage. ]

Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Mukesh Ojha <mojha@codeaurora.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-efi <linux-efi@vger.kernel.org>
Cc: platform-driver-x86@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190328114233.27835-1-mcroce@redhat.com
2019-03-29 10:16:27 +01:00
Mahesh Salgaonkar
6f845ebec2 powerpc/pseries/mce: Fix misleading print for TLB mutlihit
On pseries, TLB multihits are reported as D-Cache Multihit. This is because
of the wrongly populated mc_err_types[] array. Per PAPR, the TLB error type is
0x04, but mc_err_types[4] points to the "D-Cache" string instead of "TLB".
Fix up the mc_err_types[] array.

Machine check error type per PAPR:
  0x00 = Uncorrectable Memory Error (UE)
  0x01 = SLB error
  0x02 = ERAT Error
  0x04 = TLB error
  0x05 = D-Cache error
  0x07 = I-Cache error

Fixes: 8f0b80561f ("powerpc/pseries: Display machine check error details.")
Cc: stable@vger.kernel.org # v4.20+
Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-03-29 16:59:19 +11:00
Paolo Bonzini
690edec54c KVM/ARM fixes for 5.1
- Fix THP handling in the presence of pre-existing PTEs
 - Honor request for PTE mappings even when THPs are available
 - GICv4 performance improvement
 - Take the srcu lock when writing to guest-controlled ITS data structures
 - Reset the virtual PMU in preemptible context
 - Various cleanups
 -----BEGIN PGP SIGNATURE-----
 
 iQJJBAABCgAzFiEEn9UcU+C1Yxj9lZw9I9DQutE9ekMFAlycy1MVHG1hcmMuenlu
 Z2llckBhcm0uY29tAAoJECPQ0LrRPXpDlJkQAK0mjfSOyZpeYX48XqE2Vnmr0HH2
 bfwh63jv2apR4frlT9XiSV4xUEhM8n6mblnSSuXZlykR74ZxzrCb0QKROTe67jAQ
 HxJrK+9JuKZE4VTKtpx+JIyr377NosY/cx2E87Agdof/+L2tqfjKAnI6MLsVIuca
 H3F6SH2YVhcPcf0+6nKPUF5wacKlsDZfmrq3Hgk1MbgXG5UmFdjBFlcTUOFSumkK
 E4hD93VHiQUDKvJL/qRN3p940c0lVOcDxK9Gu1AhmosvdLYEZJgtZeLBvysDH6Xl
 xbq5EU5q/Gks/DEQqallaOiocREc3mZCUlF+pC1sLUHalEClEvr47hlXCLHz9RzA
 yV0wW7Foo0oZb+zQyswrCZbAhpO7Vuqd00ghSgV6n6hTS2O7t9syiSe3QCvgCkoy
 YZ+Eogx66m68bvbBqFrLwGHUlg+XSpw8ZZ5sh6d6iCVjSVp7ptbmxMgxuMQoJgVT
 jl7WbXWYdZ2f3gYlKhtN+3IGxFIAUvFeSxFxJYnwWQPEcR6gY/lBCbwGTL5WT96F
 K12V6egTau3QrG+hP+DYPwzWNPcC5oNlayOKNPlTuiwzp2PsSdBAMpEN4Kadl8m8
 cMvvRhrcpmrTZdLLQATU9O9A75M1si1a9zaLuC1icYZwYeV593BnqvBBRwS9r/1S
 ccg/6EkNpWZby3yd
 =mpQX
 -----END PGP SIGNATURE-----

Merge tag 'kvmarm-fixes-for-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master

KVM/ARM fixes for 5.1

- Fix THP handling in the presence of pre-existing PTEs
- Honor request for PTE mappings even when THPs are available
- GICv4 performance improvement
- Take the srcu lock when writing to guest-controlled ITS data structures
- Reset the virtual PMU in preemptible context
- Various cleanups
2019-03-28 19:07:30 +01:00
Sean Christopherson
45def77ebf KVM: x86: update %rip after emulating IO
Most (all?) x86 platforms provide a port IO based reset mechanism, e.g.
OUT 92h or CF9h.  Userspace may emulate said mechanism, i.e. reset a
vCPU in response to KVM_EXIT_IO, without explicitly announcing to KVM
that it is doing a reset, e.g. Qemu jams vCPU state and resumes running.

To avoid corrupting %rip after such a reset, commit 0967b7bf1c ("KVM:
Skip pio instruction when it is emulated, not executed") changed the
behavior of PIO handlers, i.e. today's "fast" PIO handling, to skip the
instruction prior to exiting to userspace.  Full emulation doesn't need
such tricks because re-emulating the instruction will naturally handle
%rip being changed to point at the reset vector.

Updating %rip prior to exiting to userspace has several drawbacks:

  - Userspace sees the wrong %rip on the exit, e.g. if PIO emulation
    fails it will likely yell about the wrong address.
  - Single-step exits to userspace are effectively dropped as
    KVM_EXIT_DEBUG is overwritten with KVM_EXIT_IO.
  - Behavior of PIO emulation is different depending on whether it
    goes down the fast path or the slow path.

Rather than skip the PIO instruction before exiting to userspace,
snapshot the linear %rip and cancel PIO completion if the current
value does not match the snapshot.  For a 64-bit vCPU, i.e. the most
common scenario, the snapshot and comparison has negligible overhead
as VMCS.GUEST_RIP will be cached regardless, i.e. there is no extra
VMREAD in this case.

All other alternatives to snapshotting the linear %rip that don't
rely on an explicit reset announcement suffer from one corner case
or another.  For example, canceling PIO completion on any write to
%rip fails if userspace does a save/restore of %rip, and attempting to
avoid that issue by canceling PIO only if %rip changed then fails if PIO
collides with the reset %rip.  Attempting to zero in on the exact reset
vector won't work for APs, which means adding more hooks such as the
vCPU's MP_STATE, and so on and so forth.

Checking for a linear %rip match technically suffers from corner cases,
e.g. userspace could theoretically rewrite the underlying code page and
expect a different instruction to execute, or the guest hardcodes a PIO
reset at 0xfffffff0, but those are far, far outside of what can be
considered normal operation.
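
As a rough illustration of the scheme (kvm_get_linear_rip() and
kvm_skip_emulated_instruction() are existing KVM x86 helpers; the
pio_linear_rip field and the function names below are assumed for the
sketch, not copied from the source):

  static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
  {
          /* If the vCPU was reset, %rip no longer matches: drop the skip. */
          if (kvm_get_linear_rip(vcpu) != vcpu->arch.pio_linear_rip)
                  return 1;

          return kvm_skip_emulated_instruction(vcpu);
  }

  static int kvm_fast_pio_out_setup(struct kvm_vcpu *vcpu)
  {
          /* Arm the userspace exit and remember where the guest was. */
          vcpu->arch.pio_linear_rip = kvm_get_linear_rip(vcpu);
          vcpu->arch.complete_userspace_io = complete_fast_pio_out;
          return 0;       /* exit to userspace with KVM_EXIT_IO */
  }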

Fixes: 432baf60ee ("KVM: VMX: use kvm_fast_pio_in for handling IN I/O")
Cc: <stable@vger.kernel.org>
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:29:04 +01:00
Vitaly Kuznetsov
013cc6ebbf x86/kvm/hyper-v: avoid spurious pending stimer on vCPU init
When userspace initializes guest vCPUs it may want to zero all supported
MSRs, including Hyper-V related ones such as HV_X64_MSR_STIMERn_CONFIG/
HV_X64_MSR_STIMERn_COUNT. With commit f3b138c5d8 ("kvm/x86: Update SynIC
timers on guest entry only") we began doing stimer_mark_pending()
unconditionally on every config change.

The issue I'm observing manifests itself as following:
- Qemu writes 0 to STIMERn_{CONFIG,COUNT} MSRs and marks all stimers as
  pending in stimer_pending_bitmap, arms KVM_REQ_HV_STIMER;
- kvm_hv_has_stimer_pending() starts returning true;
- kvm_vcpu_has_events() starts returning true;
- kvm_arch_vcpu_runnable() starts returning true;
- when kvm_arch_vcpu_ioctl_run() gets into
  (vcpu->arch.mp_state == KVM_MP_STATE_UNINITIALIZED) case:
  - kvm_vcpu_block() gets in 'kvm_vcpu_check_block(vcpu) < 0' and returns
    immediately, avoiding normal wait path;
  - -EAGAIN is returned from kvm_arch_vcpu_ioctl_run() immediately forcing
    userspace to retry.

So instead of the normal wait path we get a busy loop on all secondary vCPUs
before they get the INIT signal. This seems to be undesirable, especially given
that this happens even when Hyper-V extensions are not used.

Generally, it seems to be pointless to mark a stimer as pending in
stimer_pending_bitmap and arm KVM_REQ_HV_STIMER, as the only thing
kvm_hv_process_stimers() will do is clear the corresponding bit. We may
just not mark disabled timers as pending instead.
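
A sketch of that approach on a config/count write (field and helper names
are assumed here for illustration):

  /* Only an enabled timer can have real work for kvm_hv_process_stimers(). */
  if (stimer->config.enable)
          stimer_mark_pending(stimer, false);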

Fixes: f3b138c5d8 ("kvm/x86: Update SynIC timers on guest entry only")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:29:03 +01:00
Xiaoyao Li
2bdb76c015 kvm/x86: Move MSR_IA32_ARCH_CAPABILITIES to array emulated_msrs
Since MSR_IA32_ARCH_CAPABILITIES is emulated unconditionally even if the
host doesn't support it, we should move it from the array msrs_to_save to
the array emulated_msrs, to report to userspace that the guest supports
this MSR.

Signed-off-by: Xiaoyao Li <xiaoyao.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:29:01 +01:00
Sean Christopherson
0cf9135b77 KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts
The CPUID flag ARCH_CAPABILITIES is unconditionally exposed to host
userspace for all x86 hosts, i.e. KVM advertises ARCH_CAPABILITIES
regardless of hardware support under the pretense that KVM fully
emulates MSR_IA32_ARCH_CAPABILITIES.  Unfortunately, only VMX hosts
handle accesses to MSR_IA32_ARCH_CAPABILITIES (despite KVM_GET_MSRS
also reporting MSR_IA32_ARCH_CAPABILITIES for all hosts).

Move the MSR_IA32_ARCH_CAPABILITIES handling to common x86 code so
that it's emulated on AMD hosts.

Fixes: 1eaafe91a0 ("kvm: x86: IA32_ARCH_CAPABILITIES is always supported")
Cc: stable@vger.kernel.org
Reported-by: Xiaoyao Li <xiaoyao.li@linux.intel.com>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:29:00 +01:00
Ben Gardon
f285c633cb kvm: mmu: Used range based flushing in slot_handle_level_range
Replace kvm_flush_remote_tlbs with kvm_flush_remote_tlbs_with_address
in slot_handle_level_range. When range based flushes are not enabled
kvm_flush_remote_tlbs_with_address falls back to kvm_flush_remote_tlbs.

This changes the behavior of many functions that indirectly use
slot_handle_level_range, iff the range based flushes are enabled. The
only potential problem I see with this is that kvm->tlbs_dirty will be
cleared less often, however the only caller of slot_handle_level_range that
checks tlbs_dirty is kvm_mmu_notifier_invalidate_range_start which
checks it and does a kvm_flush_remote_tlbs after calling
kvm_unmap_hva_range anyway.
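
A sketch of the substitution inside the iteration (the gfn bounds are
assumed names for illustration):

  if (flush && lock_flush_tlb) {
          /* Flush only the gfn range this walk covered, not every TLB. */
          kvm_flush_remote_tlbs_with_address(kvm, start_gfn,
                                             end_gfn - start_gfn + 1);
          flush = false;
  }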

Tested: Ran all kvm-unit-tests on an Intel Haswell machine with and
	without this patch. The patch introduced no new failures.

Signed-off-by: Ben Gardon <bgardon@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:28:57 +01:00
Masahiro Yamada
3d9683cf3b KVM: export <linux/kvm_para.h> and <asm/kvm_para.h> iif KVM is supported
I do not see any consistency in the headers_install handling of
<linux/kvm_para.h> and <asm/kvm_para.h>.

According to my analysis of Linux 5.1-rc1, there are 3 groups:

 [1] Both <linux/kvm_para.h> and <asm/kvm_para.h> are exported

    alpha, arm, hexagon, mips, powerpc, s390, sparc, x86

 [2] <asm/kvm_para.h> is exported, but <linux/kvm_para.h> is not

    arc, arm64, c6x, h8300, ia64, m68k, microblaze, nios2, openrisc,
    parisc, sh, unicore32, xtensa

 [3] Neither <linux/kvm_para.h> nor <asm/kvm_para.h> is exported

    csky, nds32, riscv

This does not match the actual KVM support. At least, [2] is
half-baked.

Nor do arch maintainers look like they care about this. For example,
commit 0add53713b ("microblaze: Add missing kvm_para.h to Kbuild")
exported <asm/kvm_para.h> to user-space in order to fix an in-kernel
build error.

We have two ways to make this consistent:

 [A] export both <linux/kvm_para.h> and <asm/kvm_para.h> for all
     architectures, irrespective of the KVM support

 [B] Match the header export of <linux/kvm_para.h> and <asm/kvm_para.h>
     to the KVM support

My first attempt was [A] because the code looks cleaner, but Paolo
suggested [B].

So, this commit goes with [B].

For most architectures, <asm/kvm_para.h> was moved to the kernel-space.
I changed include/uapi/linux/Kbuild so that it checks generated
asm/kvm_para.h as well as checked-in ones.

After this commit, there will be two groups:

 [1] Both <linux/kvm_para.h> and <asm/kvm_para.h> are exported

    arm, arm64, mips, powerpc, s390, x86

 [2] Neither <linux/kvm_para.h> nor <asm/kvm_para.h> is exported

    alpha, arc, c6x, csky, h8300, hexagon, ia64, m68k, microblaze,
    nds32, nios2, openrisc, parisc, riscv, sh, sparc, unicore32, xtensa

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:42 +01:00
Wei Yang
4d66623cfb KVM: x86: remove check on nr_mmu_pages in kvm_arch_commit_memory_region()
* nr_mmu_pages would be non-zero only if kvm->arch.n_requested_mmu_pages is
  non-zero.

* nr_mmu_pages is always non-zero, since kvm_mmu_calculate_mmu_pages()
  never returns zero.

Based on these two observations, we can merge the two *if* clauses and use
the return value from kvm_mmu_calculate_mmu_pages() directly. This simplifies
the code and also eliminates the possibility for a reader to believe
nr_mmu_pages could be zero.
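
A before/after sketch of the merge (kvm_mmu_change_mmu_pages() is assumed
here as the consumer of the value, for illustration):

  /* before */
  if (!kvm->arch.n_requested_mmu_pages)
          nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
  if (nr_mmu_pages)
          kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);

  /* after */
  if (!kvm->arch.n_requested_mmu_pages)
          kvm_mmu_change_mmu_pages(kvm, kvm_mmu_calculate_mmu_pages(kvm));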

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:19 +01:00
Krish Sadhukhan
711eff3a8f kvm: nVMX: Add a vmentry check for HOST_SYSENTER_ESP and HOST_SYSENTER_EIP fields
According to section "Checks on VMX Controls" in Intel SDM vol 3C, the
following check is performed on vmentry of L2 guests:

    On processors that support Intel 64 architecture, the IA32_SYSENTER_ESP
    field and the IA32_SYSENTER_EIP field must each contain a canonical
    address.
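
A sketch of the resulting vmentry check (is_noncanonical_address() is an
existing KVM x86 helper; the exact placement and error value are assumed
for the illustration):

  if (is_noncanonical_address(vmcs12->host_ia32_sysenter_esp, vcpu) ||
      is_noncanonical_address(vmcs12->host_ia32_sysenter_eip, vcpu))
          return -EINVAL;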

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:18 +01:00
Singh, Brijesh
05d5a48635 KVM: SVM: Workaround errata#1096 (insn_len maybe zero on SMAP violation)
Errata#1096:

On a nested data page fault when CR.SMAP=1 and the guest data read
generates a SMAP violation, the GuestInstrBytes field of the VMCB on a
VMEXIT will incorrectly return 0h instead of the correct guest
instruction bytes.

Recommend Workaround:

To determine what instruction the guest was executing the hypervisor
will have to decode the instruction at the instruction pointer.

The recommended workaround cannot be implemented for an SEV
guest because guest memory is encrypted with the guest-specific key,
and the instruction decoder will not be able to decode the instruction
bytes. If we hit this errata in an SEV guest, log the message
and request a guest shutdown.
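
A sketch of that handling in the page-fault path (helper names and the
exact message are assumed for illustration):

  if (unlikely(!insn_len) && sev_guest(vcpu->kvm)) {
          /* Cannot decode encrypted guest memory; give up on this guest. */
          pr_err_ratelimited("KVM: SEV guest hit errata 1096, shutting down\n");
          kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
          return false;
  }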

Reported-by: Venkatesh Srinivas <venkateshs@google.com>
Cc: Jim Mattson <jmattson@google.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:17 +01:00
Sean Christopherson
47c42e6b41 KVM: x86: fix handling of role.cr4_pae and rename it to 'gpte_size'
The cr4_pae flag is a bit of a misnomer; its purpose is really to track
whether the guest PTE that is being shadowed is a 4-byte entry or an
8-byte entry.  Prior to supporting nested EPT, the size of the gpte was
reflected purely by CR4.PAE.  KVM fudged things a bit for direct sptes,
but it was mostly harmless since the size of the gpte never mattered.
Now that a spte may be tracking an indirect EPT entry, relying on
CR4.PAE is wrong and ill-named.

For direct shadow pages, force the gpte_size to '1' as they are always
8-byte entries; EPT entries can only be 8-bytes and KVM always uses
8-byte entries for NPT and its identity map (when running with EPT but
not unrestricted guest).

Likewise, nested EPT entries are always 8-bytes.  Nested EPT presents a
unique scenario as the size of the entries is not dictated by CR4.PAE,
but neither is the shadow page a direct map.  To handle this scenario,
set cr0_wp=1 and smap_andnot_wp=1, an otherwise impossible combination,
to denote a nested EPT shadow page.  Use the information to avoid
incorrectly zapping an unsync'd indirect page in __kvm_sync_page().

Providing a consistent and accurate gpte_size fixes a bug reported by
Vitaly where fast_cr3_switch() always fails when switching from L2 to
L1 as kvm_mmu_get_page() would force role.cr4_pae=0 for direct pages,
whereas kvm_calc_mmu_role_common() would set it according to CR4.PAE.

Fixes: 7dcd575520 ("x86/kvm/mmu: check if tdp/shadow MMU reconfiguration is needed")
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Tested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:03 +01:00
Sean Christopherson
552c69b1dc KVM: nVMX: Do not inherit quadrant and invalid for the root shadow EPT
Explicitly zero out quadrant and invalid instead of inheriting them from
the root_mmu.  Functionally, this patch is a nop as we (should) never
set quadrant for a direct mapped (EPT) root_mmu and nested EPT is only
allowed if EPT is used for L1, and the root_mmu will never be invalid at
this point.

Explicitly setting flags sets the stage for repurposing the legacy
paging bits in role, e.g. nxe, cr0_wp, and sm{a,e}p_andnot_wp, at which
point 'smm' would be the only flag to be inherited from root_mmu.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-03-28 17:27:01 +01:00
Jann Horn
f6027c8109 x86/cpufeature: Fix __percpu annotation in this_cpu_has()
&cpu_info.x86_capability is __percpu, and the second argument of
x86_this_cpu_test_bit() is expected to be __percpu. Don't cast the
__percpu away and then implicitly add it again. This gets rid of 106
lines of sparse warnings with the kernel config I'm using.

Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Nadav Amit <namit@vmware.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190328154948.152273-1-jannh@google.com
2019-03-28 17:03:11 +01:00
Linus Torvalds
bfed6d0ffc s390 update with improvements and bug fixes for 5.1-rc2
- Fix early free of the channel program in vfio
 
  - On AP device removal make sure that all messages are flushed
    with the driver still attached that queued the message
 
  - Limit brk randomization to 32MB to reduce the chance that the
    heap of ld.so is placed after the main stack
 
  - Add a rolling average for the steal time of a CPU, this will be
    needed for KVM to decide when to do busy waiting
 
  - Fix a warning in the CPU-MF code
 
  - Add a notification handler for AP configuration change to react
    faster to new AP devices
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQEcBAABCAAGBQJcnIq7AAoJEDjwexyKj9rgddUH/3VQP6BMvq2fwAsLqx8JeYgT
 082xzP2nHli3tO6m8fFHmtqrSg5KTEDfuQVafqp92LeEMKUNWQI6kRu7rXeAVBct
 M6hx21mqkm9VNjAlAjSq8IAUXP2K6/K0BMD5mYInYYYVRvJm3on4sHnkEj0kvXbm
 OGxwnNBd9UnH5g6ti2vW4cyDvs0aqj1eDbSudy5KedumQz5J2XdFPn4f4Ej6p2+t
 nuvlZFDnZ2Z4rliE3RFCuKExZR+YFZgS1urm6pcklncfvbJRsqFJ+nvhurskDUI3
 4gOp1Yv1tvGNv/cNVEtnz8g/Kg8/sI7evjQBtxhtEsV/W0sbZPnjCt+28Cf1DN4=
 =4nL7
 -----END PGP SIGNATURE-----

Merge tag 's390-5.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 fixes from Martin Schwidefsky:
 "Improvements and bug fixes for 5.1-rc2:

   - Fix early free of the channel program in vfio

   - On AP device removal make sure that all messages are flushed with
     the driver still attached that queued the message

   - Limit brk randomization to 32MB to reduce the chance that the heap
     of ld.so is placed after the main stack

   - Add a rolling average for the steal time of a CPU, this will be
     needed for KVM to decide when to do busy waiting

   - Fix a warning in the CPU-MF code

   - Add a notification handler for AP configuration change to react
     faster to new AP devices"

* tag 's390-5.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/cpumf: Fix warning from check_processor_id
  zcrypt: handle AP Info notification from CHSC SEI command
  vfio: ccw: only free cp on final interrupt
  s390/vtime: steal time exponential moving average
  s390/zcrypt: revisit ap device remove procedure
  s390: limit brk randomization to 32MB
2019-03-28 08:35:32 -07:00
Linus Torvalds
97c41a6bdc ARM: SoC fixes for v5.1
A couple of minor fixes only for now
 
 - Incorrect DMA channels on Renesas R-Car
 - Broadcom bcm2835 error handling fixes
 - Kconfig dependency fixes for bcm2835 and davinci
 - CPU idle wakeup fix for i.MX6
 - MMC regression on Tegra186
 - Incorrect phy settings on one imx board
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2
 
 iQIcBAABCAAGBQJcmQBWAAoJEGCrR//JCVInXwQP/isXeyDBtT1xlJMUBU8CnIuz
 pDp1vZyEsGkLlErI9T299+sZL4XIfz1eHXiJmnQZvGMefumvim5zEvdo469Jk/Da
 9Fu/1yo4Dy6pIIkoUFp4LeQVZoEVtWHrhH9IIIWuN7XlnLWeBxVPggp64gKIVXry
 iqRa7h7hM15dsYhmeri5fLkR9J3kMLfIkZCT1m6ysYGc0LBj5a9kcf+8B5Tebo+8
 ffwiMSo3mNhsepPB1sFRDUNLzCsa3PiA/qycJlg5UTap2YkwmZ93ANHv6DbDztza
 Vgw7uFsZ04a6rZFa0jZs9On3GjxB1iLO1b8PM3dNHa2yBjprK5VYhUNh5tcIlPUL
 l5IPzJTnD6qEI/8H+kjbAyl53TYQh+YjRKnN6Khvbuec7BgMlBvLTNwZNJHGV9oo
 2feTKhdpnHt2FhE/p+5MtXf5n+a//xY99HtKLu9EBGAG1rwMq0gahjfXVnBB+XSz
 71m/anA2C9A/zNstNOlthziomenTLSQoE7RmKty7kIB6j/rzY9yOTlCcKnSgKnOD
 TU2MyIgEzvcxOmp+5wJBL4XncWX/9MjQ53GV+23NoRwIIFP9G7A4cVUykniPbugk
 9H7bJv78O+sI/rr4vEBf3Og8yQcuLMULp0Tos7gD2b4QZ1hWWSMmKKWHQK1In4+n
 3tUmvx7HfdWxHKkRMc0U
 =nCso
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC fixes from Arnd Bergmann:
 "A couple of minor fixes only for now

   - fix for incorrect DMA channels on Renesas R-Car

   - Broadcom bcm2835 error handling fixes

   - Kconfig dependency fixes for bcm2835 and davinci

   - CPU idle wakeup fix for i.MX6

   - MMC regression on Tegra186

   - fix incorrect phy settings on one imx board"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
  arm64: tegra: Disable CQE Support for SDMMC4 on Tegra186
  ARM: dts: nomadik: Fix polarity of SPI CS
  ARM: davinci: fix build failure with allnoconfig
  ARM: imx_v4_v5_defconfig: enable PWM driver
  ARM: imx_v6_v7_defconfig: continue compiling the pwm driver
  ARM: dts: imx6dl-yapp4: Use correct pseudo PHY address for the switch
  ARM: dts: imx6qdl: Fix typo in imx6qdl-icore-rqs.dtsi
  ARM: dts: imx6ull: Use the correct style for SPDX License Identifier
  ARM: dts: pfla02: increase phy reset duration
  ARM: imx6q: cpuidle: fix bug that CPU might not wake up at expected time
  ARM: imx51: fix a leaked reference by adding missing of_node_put
  ARM: dts: imx6dl-yapp4: Use rgmii-id phy mode on the cpu port
  arm64: bcm2835: Add missing dependency on MFD_CORE.
  ARM: dts: bcm283x: Fix hdmi hpd gpio pull
  soc: bcm: bcm2835-pm: Fix error paths of initialization.
  soc: bcm: bcm2835-pm: Fix PM_IMAGE_PERI power domain support.
  arm64: dts: renesas: r8a774c0: Fix SCIF5 DMA channels
  arm64: dts: renesas: r8a77990: Fix SCIF5 DMA channels
2019-03-28 08:23:45 -07:00
Ralph Campbell
92c77f7c4d x86/mm: Don't exceed the valid physical address space
valid_phys_addr_range() is used to sanity check the physical address range
of an operation, e.g., access to /dev/mem. It uses __pa(high_memory)
internally.

If memory is populated at the end of the physical address space, then
__pa(high_memory) is outside of the physical address space because:

   high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;

For the comparison in valid_phys_addr_range() this is not an issue, but if
CONFIG_DEBUG_VIRTUAL is enabled, __pa() maps to __phys_addr(), which
verifies that the resulting physical address is within the valid physical
address space of the CPU. So in the case that memory is populated at the
end of the physical address space, this is not true and triggers a
VIRTUAL_BUG_ON().

Use __pa(high_memory - 1) to prevent the conversion from going beyond
the end of valid physical addresses.
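
A sketch of the resulting check (the x86 /dev/mem range check; treat this
as an approximation of the fix described above):

  int valid_phys_addr_range(phys_addr_t addr, size_t count)
  {
          /* Compare against the last valid byte so __pa() never sees an
           * address one past the end of the populated physical space. */
          return addr + count - 1 <= __pa(high_memory - 1);
  }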

Fixes: be62a32044 ("x86/mm: Limit mmap() of /dev/mem to valid physical addresses")
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Craig Bergstrom <craigb@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hans Verkuil <hans.verkuil@cisco.com>
Cc: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sander Eikelenboom <linux@eikelenboom.it>
Cc: Sean Young <sean@mess.org>

Link: https://lkml.kernel.org/r/20190326001817.15413-2-rcampbell@nvidia.com
2019-03-28 14:13:51 +01:00
Daniel Borkmann
a9d57ef15c x86/retpolines: Disable switch jump tables when retpolines are enabled
Commit ce02ef06fc ("x86, retpolines: Raise limit for generating indirect
calls from switch-case") raised the limit under retpolines to 20 switch
cases where gcc would only then start to emit jump tables, and therefore
effectively disabling the emission of slow indirect calls in this area.

After this was brought to the attention of the gcc folks [0], Martin Liska
then fixed gcc to align with clang by avoiding the generation of switch jump
tables entirely under retpolines. This takes effect in gcc starting
from stable version 8.4.0. Given the kernel supports compilation with older
versions of gcc where the fix is not available or backported,
we need to keep the extra KBUILD_CFLAGS around for some time and generally
set -fno-jump-tables to align with what more recent gcc does
automatically today.

More than 20 switch cases are not expected to be fast-path critical, but
it would still be good to align with gcc behavior for versions < 8.4.0 in
order to have consistency across supported gcc versions. vmlinux size
grows slightly, by 0.27%, with older gcc. This flag is only set to work
around affected gcc versions; there is no change for clang.

  [0] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86952

Suggested-by: Martin Liska <mliska@suse.cz>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Björn Töpel<bjorn.topel@intel.com>
Cc: Magnus Karlsson <magnus.karlsson@intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: David S. Miller <davem@davemloft.net>
Link: https://lkml.kernel.org/r/20190325135620.14882-1-daniel@iogearbox.net
2019-03-28 13:39:48 +01:00
Thomas Gleixner
bebd024e48 x86/smp: Enforce CONFIG_HOTPLUG_CPU when SMP=y
The SMT disable 'nosmt' command line argument does not work properly when
CONFIG_HOTPLUG_CPU is disabled. The teardown of the sibling CPUs, which are
required to be brought up due to the MCE issues, cannot work. The CPUs are
then kept in a half-dead state.

As the 'nosmt' functionality has become popular due to the speculative
hardware vulnerabilities, the half torn down state is not a proper solution
to the problem.

Enforce CONFIG_HOTPLUG_CPU=y when SMP is enabled so the full operation is
possible.

Reported-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Mukesh Ojha <mojha@codeaurora.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Rik van Riel <riel@surriel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Micheal Kelley <michael.h.kelley@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190326163811.598166056@linutronix.de
2019-03-28 13:34:58 +01:00
Thomas Richter
b6ffdf27f3 s390/cpumf: Fix warning from check_processor_id
Function __hw_perf_event_init() used a CPU variable without
ensuring CPU preemption has been disabled. This caused the
following warning in the kernel log:

  [ 7.277085] BUG: using smp_processor_id() in preemptible
                 [00000000] code: cf-csdiag/1892
  [ 7.277111] caller is cf_diag_event_init+0x13a/0x338
  [ 7.277122] CPU: 10 PID: 1892 Comm: cf-csdiag Not tainted
                 5.0.0-20190318.rc0.git0.9e1a11e0f602.300.fc29.s390x+debug #1
  [ 7.277131] Hardware name: IBM 2964 NC9 712 (LPAR)
  [ 7.277139] Call Trace:
  [ 7.277150] ([<000000000011385a>] show_stack+0x82/0xd0)
  [ 7.277161]  [<0000000000b7a71a>] dump_stack+0x92/0xd0
  [ 7.277174]  [<00000000007b7e9c>] check_preemption_disabled+0xe4/0x100
  [ 7.277183]  [<00000000001228aa>] cf_diag_event_init+0x13a/0x338
  [ 7.277195]  [<00000000002cf3aa>] perf_try_init_event+0x72/0xf0
  [ 7.277204]  [<00000000002d0bba>] perf_event_alloc+0x6fa/0xce0
  [ 7.277214]  [<00000000002dc4a8>] __s390x_sys_perf_event_open+0x398/0xd50
  [ 7.277224]  [<0000000000b9e8f0>] system_call+0xdc/0x2d8
  [ 7.277233] 2 locks held by cf-csdiag/1892:
  [ 7.277241]  #0: 00000000976f5510 (&sig->cred_guard_mutex){+.+.},
                  at: __s390x_sys_perf_event_open+0xd2e/0xd50
  [ 7.277257]  #1: 00000000363b11bd (&pmus_srcu){....},
                  at: perf_event_alloc+0x52e/0xce0

The variable is now accessed in the proper context. Use the
get_cpu_var()/put_cpu_var() pair to disable
preemption during the access.
As the hardware authorization settings apply to all CPUs, it
does not matter which CPU is used to check the authorization setting.
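
A sketch of the pattern (the per-CPU variable and the check itself are
placeholders for illustration, not the s390 driver's names):

  struct cf_info *info = &get_cpu_var(per_cpu_cf_info);

  /* Preemption is disabled here, so smp_processor_id() users are safe. */
  authorized = (info->auth & event_auth_mask) == event_auth_mask;

  put_cpu_var(per_cpu_cf_info);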

Remove the event->count assignment. It is not needed as function
perf_event_alloc() allocates memory for the event with kzalloc() and
thus count is already set to zero.

Fixes: fe5908bccc ("s390/cpum_cf_diag: Add support for s390 counter facility diagnostic trace")

Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
2019-03-28 09:28:42 +01:00
Linus Torvalds
1a9df9e29c Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:
 "Fixes here and there, a couple new device IDs, as usual:

   1) Fix BQL race in dpaa2-eth driver, from Ioana Ciornei.

   2) Fix 64-bit division in iwlwifi, from Arnd Bergmann.

   3) Fix documentation for some eBPF helpers, from Quentin Monnet.

   4) Some UAPI bpf header sync with tools, also from Quentin Monnet.

   5) Set descriptor ownership bit at the right time for jumbo frames in
      stmmac driver, from Aaro Koskinen.

   6) Set IFF_UP properly in tun driver, from Eric Dumazet.

   7) Fix load/store doubleword instruction generation in powerpc eBPF
      JIT, from Naveen N. Rao.

   8) nla_nest_start() return value checks all over, from Kangjie Lu.

   9) Fix asoc_id handling in SCTP after the SCTP_*_ASSOC changes this
      merge window. From Marcelo Ricardo Leitner and Xin Long.

  10) Fix memory corruption with large MTUs in stmmac, from Aaro
      Koskinen.

  11) Do not use ipv4 header for ipv6 flows in TCP and DCCP, from Eric
      Dumazet.

  12) Fix topology subscription cancellation in tipc, from Erik Hugne.

  13) Memory leak in genetlink error path, from Yue Haibing.

  14) Valid control actions properly in packet scheduler, from Davide
      Caratti.

  15) Even if we get EEXIST, we still need to rehash if a shrink was
      delayed. From Herbert Xu.

  16) Fix interrupt mask handling in interrupt handler of r8169, from
      Heiner Kallweit.

  17) Fix leak in ehea driver, from Wen Yang"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (168 commits)
  dpaa2-eth: fix race condition with bql frame accounting
  chelsio: use BUG() instead of BUG_ON(1)
  net: devlink: skip info_get op call if it is not defined in dumpit
  net: phy: bcm54xx: Encode link speed and activity into LEDs
  tipc: change to check tipc_own_id to return in tipc_net_stop
  net: usb: aqc111: Extend HWID table by QNAP device
  net: sched: Kconfig: update reference link for PIE
  net: dsa: qca8k: extend slave-bus implementations
  net: dsa: qca8k: remove leftover phy accessors
  dt-bindings: net: dsa: qca8k: support internal mdio-bus
  dt-bindings: net: dsa: qca8k: fix example
  net: phy: don't clear BMCR in genphy_soft_reset
  bpf, libbpf: clarify bump in libbpf version info
  bpf, libbpf: fix version info and add it to shared object
  rxrpc: avoid clang -Wuninitialized warning
  tipc: tipc clang warning
  net: sched: fix cleanup NULL pointer exception in act_mirr
  r8169: fix cable re-plugging issue
  net: ethernet: ti: fix possible object reference leak
  net: ibm: fix possible object reference leak
  ...
2019-03-27 12:22:57 -07:00
Chen Zhou
9e0a17db51 arm64: replace memblock_alloc_low with memblock_alloc
If we use "crashkernel=Y[@X]" and the start address is above 4G,
the arm64 kdump capture kernel's memblock_alloc_low() call in
request_standard_resources() may fail. Replace memblock_alloc_low()
with memblock_alloc().
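
A sketch of the change in request_standard_resources() (the variable names
and allocation size are assumed for illustration):

  /* before: restricted to memory below 4G, which this layout doesn't have */
  standard_resources = memblock_alloc_low(res_size, SMP_CACHE_BYTES);

  /* after: allocate from any available memory */
  standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);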

[    0.000000] MEMBLOCK configuration:
[    0.000000]  memory size = 0x0000000040650000 reserved size = 0x0000000004db7f39
[    0.000000]  memory.cnt  = 0x6
[    0.000000]  memory[0x0]	[0x00000000395f0000-0x000000003968ffff], 0x00000000000a0000 bytes on node 0 flags: 0x4
[    0.000000]  memory[0x1]	[0x0000000039730000-0x000000003973ffff], 0x0000000000010000 bytes on node 0 flags: 0x4
[    0.000000]  memory[0x2]	[0x0000000039780000-0x000000003986ffff], 0x00000000000f0000 bytes on node 0 flags: 0x4
[    0.000000]  memory[0x3]	[0x0000000039890000-0x0000000039d0ffff], 0x0000000000480000 bytes on node 0 flags: 0x4
[    0.000000]  memory[0x4]	[0x000000003ed00000-0x000000003ed2ffff], 0x0000000000030000 bytes on node 0 flags: 0x4
[    0.000000]  memory[0x5]	[0x0000002040000000-0x000000207fffffff], 0x0000000040000000 bytes on node 0 flags: 0x0
[    0.000000]  reserved.cnt  = 0x7
[    0.000000]  reserved[0x0]	[0x0000002040080000-0x0000002041c4dfff], 0x0000000001bce000 bytes flags: 0x0
[    0.000000]  reserved[0x1]	[0x0000002041c53000-0x0000002042c203f8], 0x0000000000fcd3f9 bytes flags: 0x0
[    0.000000]  reserved[0x2]	[0x000000207da00000-0x000000207dbfffff], 0x0000000000200000 bytes flags: 0x0
[    0.000000]  reserved[0x3]	[0x000000207ddef000-0x000000207fbfffff], 0x0000000001e11000 bytes flags: 0x0
[    0.000000]  reserved[0x4]	[0x000000207fdf2b00-0x000000207fdfc03f], 0x0000000000009540 bytes flags: 0x0
[    0.000000]  reserved[0x5]	[0x000000207fdfd000-0x000000207ffff3ff], 0x0000000000202400 bytes flags: 0x0
[    0.000000]  reserved[0x6]	[0x000000207ffffe00-0x000000207fffffff], 0x0000000000000200 bytes flags: 0x0
[    0.000000] Kernel panic - not syncing: request_standard_resources: Failed to allocate 384 bytes
[    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.1.0-next-20190321+ #4
[    0.000000] Call trace:
[    0.000000]  dump_backtrace+0x0/0x188
[    0.000000]  show_stack+0x24/0x30
[    0.000000]  dump_stack+0xa8/0xcc
[    0.000000]  panic+0x14c/0x31c
[    0.000000]  setup_arch+0x2b0/0x5e0
[    0.000000]  start_kernel+0x90/0x52c
[    0.000000] ---[ end Kernel panic - not syncing: request_standard_resources: Failed to allocate 384 bytes ]---
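
A minimal sketch of the kind of change described above (variable and
size names are assumptions, not the exact arm64 setup code):

  /*
   * memblock_alloc_low() only allocates from memory below 4G; on a
   * kdump capture kernel whose usable memory starts above 4G that
   * zone is empty, so the allocation fails and the kernel panics as
   * shown above. memblock_alloc() has no such limit.
   */
  standard_resources = memblock_alloc(res_size, SMP_CACHE_BYTES);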

Link: https://www.spinics.net/lists/arm-kernel/msg715293.html
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2019-03-27 18:12:41 +00:00
Matteo Croce
b929a500d6 x86/realmode: Don't leak the trampoline kernel address
Since commit

  ad67b74d24 ("printk: hash addresses printed with %p")

at boot "____ptrval____" is printed instead of the trampoline addresses:

  Base memory trampoline at [(____ptrval____)] 99000 size 24576

Remove the print as we don't want to leak kernel addresses and this
statement is not needed anymore.
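
For reference, the dropped statement was roughly of this shape (a
hedged reconstruction, variable names assumed):

  /* %p has been hashed since ad67b74d24, so this line could only ever
   * print "(____ptrval____)"; switching to %px would leak the address,
   * hence the print is simply removed. */
  printk(KERN_DEBUG "Base memory trampoline at [%p] %llx size %zu\n",
         base, (unsigned long long)mem, size);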

Fixes: ad67b74d24 ("printk: hash addresses printed with %p")
Signed-off-by: Matteo Croce <mcroce@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190326203046.20787-1-mcroce@redhat.com
2019-03-27 14:22:08 +01:00
Baoquan He
0f02daed4e x86/boot: Fix incorrect ifdeffery scope
The declarations related to immovable memory handling are outside the
BOOT_COMPRESSED_MISC_H #ifdef scope; wrap them inside it.
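
A hedged sketch of the intended layout (the guard name follows the
commit text; the specific declaration shown is an assumption):

  #ifndef BOOT_COMPRESSED_MISC_H
  #define BOOT_COMPRESSED_MISC_H

  /* ... existing declarations ... */

  /* immovable-memory declarations now live inside the header guard */
  int count_immovable_mem_regions(void);

  #endif /* BOOT_COMPRESSED_MISC_H */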

Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Chao Fan <fanc.fnst@cn.fujitsu.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190304055546.18566-1-bhe@redhat.com
2019-03-27 14:00:51 +01:00
Gautham R. Shenoy
ce9afe08e7 powerpc/pseries/energy: Use OF accessor functions to read ibm,drc-indexes
In cpu_to_drc_index(), in the case when FW_FEATURE_DRC_INFO is absent,
we currently use of_get_property() to obtain a pointer to the array
corresponding to the property "ibm,drc-indexes". The elements of this
array are of type __be32, but are accessed without any conversion to
the OS endianness, which is buggy on a Little Endian OS.

Fix this by using the of_property_read_u32_index() accessor function
to safely read the elements of the array.
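
A hedged sketch of the endian-safe access (the device-node and index
variables are assumptions, not the exact pseries code):

  #include <linux/of.h>

  static int read_drc_index(struct device_node *dn, int thread_index,
                            u32 *drc_index)
  {
          /*
           * of_property_read_u32_index() converts each __be32 element
           * to CPU endianness, unlike dereferencing the raw property
           * data. Element 0 of "ibm,drc-indexes" holds the count, so
           * the entry for a given thread is offset by one.
           */
          return of_property_read_u32_index(dn, "ibm,drc-indexes",
                                            thread_index + 1, drc_index);
  }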

Fixes: e83636ac33 ("pseries/drc-info: Search DRC properties for CPU indexes")
Cc: stable@vger.kernel.org # v4.16+
Reported-by: Pavithra R. Prakash <pavrampu@in.ibm.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
[mpe: Make the WARN_ON a WARN_ON_ONCE so it's not retriggerable]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-03-27 10:40:09 +11:00
Jonathan Hunter
9395874219 arm64: tegra: Disable CQE Support for SDMMC4 on Tegra186
Enabling CQE support on Tegra186 Jetson TX2 has introduced a regression
that is causing accesses to the file-system on the eMMC to fail. Errors
such as the following have been observed ...

 mmc2: running CQE recovery
 mmc2: mmc_select_hs400 failed, error -110
 print_req_error: I/O error, dev mmcblk2, sector 8 flags 80700
 mmc2: cqhci: CQE failed to exit halt state

For now disable CQE support for Tegra186 until this issue is resolved.

Fixes: dfd3cb6feb ("arm64: tegra: Add CQE Support for SDMMC4")
Signed-off-by: Jonathan Hunter <jonathanh@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-03-25 17:12:20 +01:00
Arnd Bergmann
2e8c54db3b i.MX fixes for 5.1:
- Correct phy mode setting of imx6dl-yapp4 board to fix a problem
    caused by commit 5ecdd77c61 ("net: dsa: qca8k: disable delay
    for RGMII mode").
  - Add a missing of_node_put call to fix leaked reference detected by
    coccinelle in imx51 machine code.
  - Fix an imx6q cpuidle driver bug which could cause the CPU to not
    wake up at the expected time.
  - Increase the reset duration of the Ethernet PHY Micrel KSZ9031RNX
    to fix transmission timeout errors seen on the imx6qdl-phytec-pfla02
    board.
  - Correct SPDX License Identifier style for imx6ull-pinfunc-snvs.h.
  - Fix 'bus-witdh' typos in imx6qdl-icore-rqs.dtsi.
  - Correct pseudo PHY address of switch device for imx6dl-yapp4 board.
  - Update PWM driver options in imx defconfig files due to a change
    on the driver side.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJclKQNAAoJEFBXWFqHsHzOSugIAJGMo/4tEOijA6oysBhzwE3A
 xy7nHp92RxAZEImjE14NRNgyS6zTZd51PWn3CQjjtw+x+6OBsk4kI+ftQvxp1irg
 7ag6uvjZ5lPaW04tF6bUbI9vZd9+Fsy1z7D/hTzsPPj7w7iH+2rMgWsNwma/ZZ9r
 UFmSfkgxE1kj8sHsnm3EoryKLeu69gD1p+chsWwe4/zxeo+yDeOQuXc1fc05HN5Y
 JOPvHk8PWPDNHwhu8XX20aPGGZpjxi75uhwGDbIQnVCp/k4fDZyDxKfNKcZSrFbK
 JsDxGRIRYd+TXM/E/UJ1TdXsmP6pUoyMXVJi3+0nk0QqLnQqjkdTP2O9MRt+Qng=
 =jCPr
 -----END PGP SIGNATURE-----

Merge tag 'imx-fixes-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux into arm/fixes

i.MX fixes for 5.1:
 - Correct phy mode setting of imx6dl-yapp4 board to fix a problem
   caused by commit 5ecdd77c61 ("net: dsa: qca8k: disable delay
   for RGMII mode").
 - Add a missing of_node_put call to fix leaked reference detected by
   coccinelle in imx51 machine code.
 - Fix an imx6q cpuidle driver bug which could cause the CPU to not
   wake up at the expected time.
 - Increase the reset duration of the Ethernet PHY Micrel KSZ9031RNX
   to fix transmission timeout errors seen on the imx6qdl-phytec-pfla02
   board.
 - Correct SPDX License Identifier style for imx6ull-pinfunc-snvs.h.
 - Fix 'bus-witdh' typos in imx6qdl-icore-rqs.dtsi.
 - Correct pseudo PHY address of switch device for imx6dl-yapp4 board.
 - Update PWM driver options in imx defconfig files due to a change
   on the driver side.

* tag 'imx-fixes-5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/shawnguo/linux:
  ARM: imx_v4_v5_defconfig: enable PWM driver
  ARM: imx_v6_v7_defconfig: continue compiling the pwm driver
  ARM: dts: imx6dl-yapp4: Use correct pseudo PHY address for the switch
  ARM: dts: imx6qdl: Fix typo in imx6qdl-icore-rqs.dtsi
  ARM: dts: imx6ull: Use the correct style for SPDX License Identifier
  ARM: dts: pfla02: increase phy reset duration
  ARM: imx6q: cpuidle: fix bug that CPU might not wake up at expected time
  ARM: imx51: fix a leaked reference by adding missing of_node_put
  ARM: dts: imx6dl-yapp4: Use rgmii-id phy mode on the cpu port
2019-03-25 17:06:41 +01:00
Arnd Bergmann
0cee41d4d0 This pull request contains Broadcom ARM/ARM64-based SoCs fixes for 5.1,
please pull the following:
 
 - Eric provides fixes for the bcm2835-pm driver: adding a missing
   dependency on MFD_CORE for the ARM64 definition of ARCH_BCM2835,
   fixing error paths of initialization, and fixing the PM_IMAGE_PERI
   power domain
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEm+Rq3+YGJdiR9yuFh9CWnEQHBwQFAlyRTXMACgkQh9CWnEQH
 BwTKOBAAiN0SyRVtCQmomvrIOMTJilmUs+BZrcPnjlIEcxCdUcXn+qxKKgYdtXbR
 KWkrlZypv3LwTbFE9DiBxIT0f3pmBt0c8oeDdiX3uyhUHAaWBasD3+b1XBx8oyYR
 u3wU0A94tVHBvmXHIaDpbPx07Vq2LZ5U8nOGID4A5YTz12v7uzlSY6spdPZz774w
 vMLA/TrXWTvnx+MIF9zHshq/+CuOG9w3Ne86B2rCpTAAX1MxLyGNAu1dkFD/yFEl
 maBcBQJy3G2wgGCgudw3d+LMGMhwP0FKNdspblQ0yxiayPJDkHm+oOfdrF74gW00
 DldAW4nqelDkJJ6KVMuLVlkcTkUf8VXYQX+EJP0K/V3fvfiDf3pr8wK+yp95GvEc
 8/p/HwLWGQjATxUBgJIRORfTUPOTYbJUX3q6n2Wn8EKuTwWSl+mQGrsrGbv31N8I
 6+Mo2eKhxMxmdsfcN3Ox++m9XXqTlds2wXuSYq77lT0kXspq5HuaMu1PsRPsvDFN
 pbAqaKgf7DPoUQBrGFaJJgjB1bj+gG/7r4rTcNXEHxQcAneo20TuGtwjNq2rxVVY
 8fn9eDbSrh/e7wduXf6hhrV/akn2tq5bLYUSqcWmFCpskITbz4wDbAyMtLt8qZty
 WGZBmnP7e3O3qa9wJvOkC4HKmb878+eNNn8Uvl6k/riu34kUmj8=
 =0kxj
 -----END PGP SIGNATURE-----

Merge tag 'arm-soc/for-5.1/soc-fixes' of https://github.com/Broadcom/stblinux into arm/fixes

This pull request contains Broadcom ARM/ARM64-based SoCs fixes for 5.1,
please pull the following:

- Eric provides fixes for the bcm2835-pm driver: adding a missing
  dependency on MFD_CORE for the ARM64 definition of ARCH_BCM2835,
  fixing error paths of initialization, and fixing the PM_IMAGE_PERI
  power domain

* tag 'arm-soc/for-5.1/soc-fixes' of https://github.com/Broadcom/stblinux:
  arm64: bcm2835: Add missing dependency on MFD_CORE.
  soc: bcm: bcm2835-pm: Fix error paths of initialization.
  soc: bcm: bcm2835-pm: Fix PM_IMAGE_PERI power domain support.
2019-03-25 17:05:30 +01:00
Arnd Bergmann
274a8ddcbc This pull request contains Broadcom ARM-based SoCs Device Tree fixes for
5.1, please pull the following:
 
 - Helen fixes the HDMI hot-plug detect GPIO polarity for the Raspberry
   Pi model B revision 2
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEm+Rq3+YGJdiR9yuFh9CWnEQHBwQFAlyRTfsACgkQh9CWnEQH
 BwQbWBAAohl51B5h7cJGWz2F53tP9fDGcDL8CN1D2gIIid/3IPQtxhC7Z+Gdj0SF
 mlSUmibKF1LeIgYYH0y/q5sSiW1Srew5ukkgYLGIiJ49YRysvgbAn0WYue7QMU8G
 jegJBHy5Gz7JJvue+3KFaoZwDLos+IRI9vAeLMcK3PoQey4lfHS2s0NOkAVLxbcc
 bOLdCNSOLDrw+wFM3MgtNqNQPjCg7X4eTJEg5pKwGFwjFdlGQqSNQ6u/SWKg6w64
 eb3CLxzihIgX7s473HD7reK2Q+yhjj1mSWDC0HWTstJ90suMBeW3yQMf7v/IwRE/
 iRLRlbsD0mCq1JMG8TuYmm3eq18PkFAteF2Hm6fMhdR1lE/QNQM2f/W7itYmMqZC
 /kL/bkvthpoDAkCjTNEBtgOri1N9oJWVgC77asUc1gAKz5SwBpkPQSNZzxy5Tuqz
 WNNd9v3lxnS5tnsqs//Iqgt23KTouD3w4MAGVkI4eVpjh1Kz83H+OrgU1fBxoqKl
 rer8h4yS2sYkHfdZsnMZ/H+GBNqme8tVJufzasr03h60mXIFzNOy8RLaexZTslil
 UP3d5e8fZo7u4BeNh7925R/K5pX+HjcmndXoQPflwoZ3SdulICgT8naDfEpoqpLQ
 UTNuhDvlNY1ec7mnZQ5GwcDUwQ77e6KsNqwq2vMtffIMyGc/4j0=
 =XW7G
 -----END PGP SIGNATURE-----

Merge tag 'arm-soc/for-5.1/devicetree-fixes' of https://github.com/Broadcom/stblinux into arm/fixes

This pull request contains Broadcom ARM-based SoCs Device Tree fixes for
5.1, please pull the following:

- Helen fixes the HDMI hot-plug detect GPIO polarity for the Raspberry
  Pi model B revision 2

* tag 'arm-soc/for-5.1/devicetree-fixes' of https://github.com/Broadcom/stblinux:
  ARM: dts: bcm283x: Fix hdmi hpd gpio pull
2019-03-25 17:04:47 +01:00
Linus Walleij
fa9463564e ARM: dts: nomadik: Fix polarity of SPI CS
The SPI DT bindings are, for historical reasons, a
pitfall: the ability to flag a GPIO line as active
high/low with the second cell's flags was introduced
later, so the SPI subsystem only accepts the boolean
flag spi-cs-high to indicate that the line is active
high.

It worked by mistake, but the mistake was corrected
in another commit.

The comment in the DTS file was also misleading: this
CS is indeed active high.

Fixes: cffbb02daf ("ARM: dts: nomadik: Augment NHK15 panel setting")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-03-25 17:03:34 +01:00
Arnd Bergmann
44cd905041 Renesas ARM Based SoC Fixes for v5.1
R-Car Gen3 E3 (r8a77990) and RZ/G2E (r8a774c0) SoCs:
 * Correct SCIF5 DMA channels
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEE4nzZofWswv9L/nKF189kaWo3T74FAlyLegEACgkQ189kaWo3
 T76QFBAAlYeKbR0QsA92hciDmlWMMvp26LPXDlI8Ue+vcxi9Weps1t2H2ZcCl0Hc
 vt5EVp5fnM15gpMwDNMVKZan7NKuxPrIqXYO23oR/YW4GUV/H9gwKteAwHEcXVp3
 7kp9OvYkPYAXNoFxK8kh26qVoeBZxckdRchHdNq/A+3dXjuamA+s04OK62S5N+4r
 eEWPZ446IOJZEswN6DCC3uF+92JLW3TrjMoRkqWm+LrKbvFFFgokx8DD+kf3+pES
 ZZzS9yuY+U/BK0mvRs7GbLNoC83b1TMp2AJDZjf0eA6gs+Nxi+2/AwtNgU7kMRlZ
 ucBc9qhg3K8r9UZQw83sWs6wcAsBLsgv9KJjY34KFJdXxHWXNIB2hvAUV1uA++hq
 Vz+wYa+zs7xW2BlSbaw8awa+o1BP01+hjLdsEJg3WJsMM+ayBOB5J3mZ28YHktlE
 Y6YLD+zxp2GsIA6FaMoJfiocZfm0AutRWyF59OHqhwKHenq+LFL3/J0lNsTEeDTm
 pU1tlNoxglxMEHIatV8YQtSngrPDeTjny0MIb2k94XXbB3ljL+bXph8JlG3hb8od
 X5ekBoLWQY+VrB/6Hy2cHQdJSWaZTf/KYF3QyFO8790MJU8rjQ208oYRiTax5U1i
 K2KIquikCDOiFbRgZy3bR1J9wJUdnrx/EI9qzMyjv+hrcWpxJTM=
 =6Y1e
 -----END PGP SIGNATURE-----

Merge tag 'renesas-fixes-for-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas into arm/fixes

Renesas ARM Based SoC Fixes for v5.1

R-Car Gen3 E3 (r8a77990) and RZ/G2E (r8a774c0) SoCs:
* Correct SCIF5 DMA channels

* tag 'renesas-fixes-for-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
  arm64: dts: renesas: r8a774c0: Fix SCIF5 DMA channels
  arm64: dts: renesas: r8a77990: Fix SCIF5 DMA channels
2019-03-25 17:02:31 +01:00
Sekhar Nori
2dbed152e2 ARM: davinci: fix build failure with allnoconfig
An allnoconfig build with just ARCH_DAVINCI enabled
fails because drivers/clk/davinci/* depends on
REGMAP being enabled.

Fix it by selecting REGMAP_MMIO when building in
DaVinci support.

Signed-off-by: Sekhar Nori <nsekhar@ti.com>
Reviewed-by: David Lechner <david@lechnology.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2019-03-25 16:59:30 +01:00
Michael Ellerman
d947075739 powerpc/64: Fix memcmp reading past the end of src/dest
Chandan reported that fstests' generic/026 test hit a crash:

  BUG: Unable to handle kernel data access at 0xc00000062ac40000
  Faulting instruction address: 0xc000000000092240
  Oops: Kernel access of bad area, sig: 11 [#1]
  LE SMP NR_CPUS=2048 DEBUG_PAGEALLOC NUMA pSeries
  CPU: 0 PID: 27828 Comm: chacl Not tainted 5.0.0-rc2-next-20190115-00001-g6de6dba64dda #1
  NIP:  c000000000092240 LR: c00000000066a55c CTR: 0000000000000000
  REGS: c00000062c0c3430 TRAP: 0300   Not tainted  (5.0.0-rc2-next-20190115-00001-g6de6dba64dda)
  MSR:  8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE>  CR: 44000842  XER: 20000000
  CFAR: 00007fff7f3108ac DAR: c00000062ac40000 DSISR: 40000000 IRQMASK: 0
  GPR00: 0000000000000000 c00000062c0c36c0 c0000000017f4c00 c00000000121a660
  GPR04: c00000062ac3fff9 0000000000000004 0000000000000020 00000000275b19c4
  GPR08: 000000000000000c 46494c4500000000 5347495f41434c5f c0000000026073a0
  GPR12: 0000000000000000 c0000000027a0000 0000000000000000 0000000000000000
  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
  GPR20: c00000062ea70020 c00000062c0c38d0 0000000000000002 0000000000000002
  GPR24: c00000062ac3ffe8 00000000275b19c4 0000000000000001 c00000062ac30000
  GPR28: c00000062c0c38d0 c00000062ac30050 c00000062ac30058 0000000000000000
  NIP memcmp+0x120/0x690
  LR  xfs_attr3_leaf_lookup_int+0x53c/0x5b0
  Call Trace:
    xfs_attr3_leaf_lookup_int+0x78/0x5b0 (unreliable)
    xfs_da3_node_lookup_int+0x32c/0x5a0
    xfs_attr_node_addname+0x170/0x6b0
    xfs_attr_set+0x2ac/0x340
    __xfs_set_acl+0xf0/0x230
    xfs_set_acl+0xd0/0x160
    set_posix_acl+0xc0/0x130
    posix_acl_xattr_set+0x68/0x110
    __vfs_setxattr+0xa4/0x110
    __vfs_setxattr_noperm+0xac/0x240
    vfs_setxattr+0x128/0x130
    setxattr+0x248/0x600
    path_setxattr+0x108/0x120
    sys_setxattr+0x28/0x40
    system_call+0x5c/0x70
  Instruction dump:
  7d201c28 7d402428 7c295040 38630008 38840008 408201f0 4200ffe8 2c050000
  4182ff6c 20c50008 54c61838 7d201c28 <7d402428> 7d293436 7d4a3436 7c295040

The instruction dump decodes as:
  subfic  r6,r5,8
  rlwinm  r6,r6,3,0,28
  ldbrx   r9,0,r3
  ldbrx   r10,0,r4      <-

Which shows us doing an 8 byte load from c00000062ac3fff9, which
crosses the page boundary at c00000062ac40000 and faults.

It's not OK for memcmp to read past the end of the source or
destination buffers if that would cross a page boundary, because we
don't know that the next page is mapped.

As pointed out by Segher, we can read past the end of the source or
destination as long as we don't cross a 4K boundary, because that's
our minimum page size on all platforms.

The bug is in the code at the .Lcmp_rest_lt8bytes label. When we get
there we know that s1 is 8-byte aligned and we have at least 1 byte to
read, so a single 8-byte load won't read past the end of s1 and cross
a page boundary.

But we have to be more careful with s2. So check if it's within 8
bytes of a 4K boundary and if so go to the byte-by-byte loop.
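
The fix itself is in powerpc assembly, but the added check can be
modelled in C roughly like this (a sketch, not the actual code):

  #include <stdint.h>

  /*
   * An 8-byte load at p stays within one 4K page only if p's offset
   * within the page is at most 0xff8 (4096 - 8). If s2 is closer than
   * that to a 4K boundary, use the byte-by-byte loop instead.
   */
  static inline int tail_load_may_cross_4k(const void *p)
  {
          return ((uintptr_t)p & 0xfff) > 0xff8;
  }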

Fixes: 2d9ee327ad ("powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()")
Cc: stable@vger.kernel.org # v4.19+
Reported-by: Chandan Rajendra <chandan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Tested-by: Chandan Rajendra <chandan@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-03-25 23:33:26 +11:00