* 'next/fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/linux-arm-soc: (35 commits)
ARM: msm: platsmp: determine number of CPU cores at boot time
ARM: Tegra: Seaboard: Fix I2C bus numbering for ADT7461
ARM: Tegra: Trimslice: Tri-state DAP3 pinmux
ARM: orion5x: fixup 5181 MPP mask check
ARM: mxs-dma: include <linux/dmaengine.h>
ARM: i.MX53: consistently use MX53_UART_PAD_CTRL for uart txd/rxd/rts/cts
ARM: i.MX53: UARTn_CTS pin should not change RTS input select
ARM: i.MX53: UARTn_TXD pin should not change RXD input select
ARM: mx25: Fix typo on CAN1_RX pad setting
iomux-mx53: add missing 'IOMUX_CONFIG_SION' for some I2C pad definitions
ARM: NUC93X: add UL suffix to VMALLOC_END to ensure it is properly typed
ARM: LPC32XXX: add UL suffix to VMALLOC_END to ensure it is properly typed
ARM: CNS3XXX: add UL suffix to VMALLOC_END to ensure it is properly typed
ARM: i.MX53: Fix IOMUX type o's
ARM i.MX dma: Fix burstsize settings
mach-mx5: fix the I2C clock parents
ARM: mxs/tx28: according to the TX28's datasheet D4-D7 are not used for MMC0
ARM i.MX23/28: platform-mxsfb: Add missing include of linux/dma-mapping.h
ARM: mx53: Fix some interrupts marked as reserved.
MXC: iomux-v3: correct NO_PAD_CTRL definition
...
Fix up trivial conflict in arch/arm/mach-imx/mach-mx31_3ds.c
Enable cpu_has_clo_clz only when CONFIG_CPU_MIPS32 or CONFIG_CPU_MIPS64
is selected. This will optimize fls() and __fls() to use CLZ insn, and
eventually ffs() and __ffs() as well.
Malta and MIPSSim are development platforms, and need to take care of
various processor configurations, release revisions and so on, even
across different MIPS ISAs. For such platforms we have to be careful,
for instance, with turning on cpu_has_mips{32,64}r[12] features.
As for CLZ, all MIPS32/64 processors support it, regardless of release
revisions.
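For reference, a condensed sketch of what fls() amounts to once
cpu_has_clo_clz is a compile-time constant (modelled on asm/bitops.h,
simplified):

  static inline int fls(int x)
  {
      int r;
      unsigned int v = x;

      if (__builtin_constant_p(cpu_has_clo_clz) && cpu_has_clo_clz) {
          /* CLZ of 0 yields 32 on MIPS32/64, so fls(0) == 0 falls out */
          __asm__("clz %0, %1" : "=r" (r) : "r" (x));
          return 32 - r;
      }

      /* generic fallback when CLZ availability is only known at run time */
      if (!v)
          return 0;
      r = 32;
      if (!(v & 0xffff0000u)) { v <<= 16; r -= 16; }
      if (!(v & 0xff000000u)) { v <<= 8;  r -= 8;  }
      if (!(v & 0xf0000000u)) { v <<= 4;  r -= 4;  }
      if (!(v & 0xc0000000u)) { v <<= 2;  r -= 2;  }
      if (!(v & 0x80000000u)) { v <<= 1;  r -= 1;  }
      return r;
  }

With cpu_has_clo_clz constant and true, the compiler drops the fallback
entirely and the whole function reduces to a single CLZ.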
Signed-off-by: Shinya Kuribayashi <skuribay@pobox.com>
To: David VomLehn <dvomlehn@cisco.com>
To: macro@linux-mips.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1453/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This will optimize fls() and __fls() to use CLZ throughout the kernel,
and any other optimizations that depend on constant cpu_has_* values
will also take effect.
Signed-off-by: David VomLehn <dvomlehn@cisco.com>
Signed-off-by: Shinya Kuribayashi <skuribay@pobox.com>
To: David VomLehn <dvomlehn@cisco.com>
To: macro@linux-mips.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/1452/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
fixrange_init() allocates page tables for all addresses higher than
FIXADDR_TOP. On processors that override the default FIXADDR_TOP
address of 0xfffe_0000, this can consume up to 4 pages (1 page per 4MB)
for pgd's that are never used.
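A sketch of the intended change (macro names per arch/mips; the exact
hunk may differ): cover only the fixmap window itself, rounded out to
PMD granularity, instead of everything above FIXADDR_TOP.

  static void __init fixmap_pagetable_init(pgd_t *pgd_base)
  {
      unsigned long vaddr;

      /* lowest fixmap slot, rounded down to a PMD boundary */
      vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
      fixrange_init(vaddr, vaddr + FIXADDR_SIZE, pgd_base);
  }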
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1980/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Memory maps and addressing quirks are normally defined in <spaces.h>.
There are already three targets that need to override FIXADDR_TOP, and
others exist. This will be a cleaner approach than adding lots of
ifdefs in fixmap.h.
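The arrangement looks roughly like this (the platform directory name
and the override value below are hypothetical):

  /* asm/mach-foo/spaces.h (hypothetical platform): pick the value here */
  #define FIXADDR_TOP    ((unsigned long)(long)(int)0xff200000)
  #include <asm/mach-generic/spaces.h>

  /* asm/fixmap.h keeps a single fallback instead of per-board ifdefs */
  #ifndef FIXADDR_TOP
  #define FIXADDR_TOP    ((unsigned long)(long)(int)0xfffe0000)
  #endif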
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1573/
Acked-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On processors with deep write buffers, it is likely that many cycles
will pass between a CACHE instruction and the time the data actually
gets written out to DRAM. Add a SYNC instruction to ensure that the
buffers get emptied before the flush functions return.
Actual problem seen in the wild:
1) dma_alloc_coherent() allocates cached memory
2) memset() is called to clear the new pages
3) dma_cache_wback_inv() is called to flush the zero data out to memory
4) dma_alloc_coherent() returns an uncached (kseg1) pointer to the
freshly allocated pages
5) Caller writes data through the kseg1 pointer
6) Buffered writeback data finally gets flushed out to DRAM
7) Part of caller's data is inexplicably zeroed out
This patch adds SYNC between steps 3 and 4, which fixed the problem.
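The fix is essentially of this shape (a sketch modelled on the c-r4k.c
flush routines, with __sync() standing in for the SYNC wrapper):

  static void sketch_dma_cache_wback_inv(unsigned long addr, unsigned long size)
  {
      /* CACHE ops: write back and invalidate the affected lines */
      blast_dcache_range(addr, addr + size);

      /* SYNC: drain the write buffer so the data is in DRAM before we return */
      __sync();
  }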
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork:
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
pfn_valid() compares the PFN to max_mapnr:
__pfn >= min_low_pfn && __pfn < max_mapnr;
On HIGHMEM kernels, highend_pfn is used to set the value of max_mapnr.
Unfortunately, highend_pfn is left at zero if the system does not
actually have enough RAM to reach into the HIGHMEM range. This causes
pfn_valid() to always return false, and when debug checks are enabled
the kernel will fail catastrophically:
Memory: 22432k/32768k available (2249k kernel code, 10336k reserved, 653k data, 1352k init, 0k highmem)
NR_IRQS:128
kfree_debugcheck: out of range ptr 81c02900h.
Kernel bug detected[#1]:
Cpu 0
$ 0 : 00000000 10008400 00000034 00000000
$ 4 : 8003e160 802a0000 8003e160 00000000
$ 8 : 00000000 0000003e 00000747 00000747
...
On such a configuration, max_low_pfn should be used to set max_mapnr.
This was seen on 2.6.34.
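The fix amounts to the following in mem_init() (usual MIPS variable
names assumed):

  #ifdef CONFIG_HIGHMEM
      /* fall back to max_low_pfn when no pages actually land in HIGHMEM */
      max_mapnr = highend_pfn ? highend_pfn : max_low_pfn;
  #else
      max_mapnr = max_low_pfn;
  #endif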
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
To: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/1992/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This patch introduces topdown mmap support in the user process address
space allocation policy.
Recently, we ran some large applications that use mmap heavily and
ran into OOM due to the inflexible mmap allocation policy on MIPS32.
Since most other major archs have supported it for years, it is
reasonable to follow the trend and reduce the pain of porting
applications.
Due to cache aliasing concerns, arch_get_unmapped_area_topdown() and
other helper functions are implemented in arch/mips/kernel/syscall.c.
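Roughly, the layout selection ends up looking like this (a sketch only;
mmap_is_legacy() and mmap_base() are helpers assumed to be defined
alongside it, and the real code also handles randomization):

  void arch_pick_mmap_layout(struct mm_struct *mm)
  {
      if (mmap_is_legacy()) {
          /* classic bottom-up layout */
          mm->mmap_base = TASK_UNMAPPED_BASE;
          mm->get_unmapped_area = arch_get_unmapped_area;
      } else {
          /* topdown: allocate downward from just below the stack */
          mm->mmap_base = mmap_base();
          mm->get_unmapped_area = arch_get_unmapped_area_topdown;
      }
  }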
Signed-off-by: Jian Peng <jipeng2005@gmail.com>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/2389/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The address limit is already set in flush_old_exec() via set_fs(USER_DS)
so this assignment is redundant.
[ralf@linux-mips.org: also see dac853ae89043f1b7752875300faf614de43c74b for
further explanation.]
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/2466/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/davej/cpufreq:
[CPUFREQ] s5pv210: make needlessly global symbols static
[CPUFREQ] exynos4210: make needlessly global symbols static
[CPUFREQ] S3C6410: Add some lower frequencies for 800MHz base clock operation
[CPUFREQ] S5PV210: Add reboot notifier to prevent system hang
[CPUFREQ] S5PV210: Adjust udelay prior to voltage scaling down
[CPUFREQ] S5PV210: Lock a mutex while changing the cpu frequency
[CPUFREQ] S5PV210: Add pm_notifier to prevent system unstable
[CPUFREQ] S5PV210: Add arm/int voltage control support
[CPUFREQ] S5PV210: Add additional symantics for "relation" in cpufreq with pm
[CPUFREQ] S3C64xx: Notify transition complete as soon as frequency changed
[CPUFREQ] S3C6410: Support 800MHz operation in cpufreq
[CPUFREQ] s5pv210-cpufreq.c: Add missing clk_put
[CPUFREQ] Move compile for S3C64XX cpufreq to /drivers/cpufreq
[CPUFREQ] Remove some vi noise that escaped into the Makefile.
[CPUFREQ] Move ARM Samsung cpufreq drivers to drivers/cpufreq/
[CPUFREQ/S3C64xx] Move S3C64xx CPUfreq driver into drivers/cpufreq
[CPUFREQ] Handle CPUs with different capabilities in acpi-cpufreq
commit 2502b667ea835ee16685c74b2a0d89ba8afe117a ("Change the m68knommu irq
handling to use the generic irq framework.") removed the reporting of spurious
interrupts on nommu (68328 and 68360).
Bring it back in a generic way, using "atomic_t irq_err_count", as that's what
most of the other architectures are using.
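In outline (the reporting hook differs between kernel versions; shown
here with arch_show_interrupts() as an example):

  #include <linux/atomic.h>
  #include <linux/seq_file.h>

  atomic_t irq_err_count;   /* bumped wherever a spurious interrupt is caught */

  /* reported next to the per-IRQ counters in /proc/interrupts */
  int arch_show_interrupts(struct seq_file *p, int prec)
  {
      seq_printf(p, "%*s: %10u\n", prec, "ERR", atomic_read(&irq_err_count));
      return 0;
  }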
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
It is not machine-specific, but common irq infrastructure.
Also add the missing asmlinkage, to match its definition.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The ColdFire processors have a much more limited set of addressing modes
that can be used for most instructions. A number of the atomic operations
have already been fixed to limit the addressing modes used with add and
sub instructions when building for ColdFire. But we missed a few.
Fix the remaining atomic operations to be clean for ColdFire processors.
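The pattern used for the remaining operations looks like this (modelled
on asm/atomic.h; ASM_DI is the constraint macro already used there):

  #ifdef CONFIG_COLDFIRE
  #define ASM_DI  "d"     /* ColdFire add/sub: data register operands only */
  #else
  #define ASM_DI  "di"    /* classic 68k: data register or immediate */
  #endif

  static inline void atomic_add(int i, atomic_t *v)
  {
      __asm__ __volatile__("addl %1,%0" : "+m" (*v) : ASM_DI (i));
  }

  static inline void atomic_sub(int i, atomic_t *v)
  {
      __asm__ __volatile__("subl %1,%0" : "+m" (*v) : ASM_DI (i));
  }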
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
When reworking bitops.h to be clean for all processor types we introduced
a CONFIG_CPU_HAS_NO_BITFIELDS define to signal whether the processor type
supports the bit field instructions. The ARCH_SIG_BITOPS functions for
m68k use these instruction types. We should base the use of these functions
(or the generic versions) on the CONFIG_CPU_HAS_NO_BITFIELDS define.
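A sketch of the gating, simplified from asm/signal.h:

  #ifndef CONFIG_CPU_HAS_NO_BITFIELDS
  #define __HAVE_ARCH_SIG_BITOPS

  static inline void sigaddset(sigset_t *set, int _sig)
  {
      asm ("bfset %0{%1,#1}"
          : "+od" (*set)
          : "id" ((_sig - 1) ^ 31)
          : "cc");
  }
  #endif
  /* otherwise __HAVE_ARCH_SIG_BITOPS stays undefined and the generic
     C helpers are used */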
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The real difference between the mmu and non-mmu variants of the delay.h
files has nothing to do with having an mmu or not. It is processor family
differences that mean slightly different code. Merge the delay_mm.h and
delay_no.h files back into a single file.
The primary difference we need to deal with is whether the processor
supports a 32bit * 32bit -> 64bit multiply. Without it we need to do some
shift scaling as well as use a 32bit * 32bit -> 32bit multiply. If building
for a multi-CPU type kernel then we must use the simpler mult/shift scaling.
This version of the delay code allows the CPU32 family to use a 64bit
mul, since it supports this instruction; the old code did not.
The changes use macros where appropriate to try and optimize constant sized
udelay times. And it removes the use of a fixed lib function for the non-mmu
case. Code size on typical kernel configurations is similar, or only larger
by a few tens of bytes.
Also removed the unused muldiv() code from delay_mm.h.
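A simplified C sketch of the two scaling strategies (the real delay.h
uses inline asm and multiply/shift scaling rather than the plain
division shown here; CONFIG_CPU_HAS_64BIT_MUL is a hypothetical guard
name):

  /* convert microseconds into __delay() loop counts */
  static inline unsigned long usecs_to_loops(unsigned long usecs)
  {
  #ifdef CONFIG_CPU_HAS_64BIT_MUL
      /* 32bit * 32bit -> 64bit multiply keeps full precision in one step */
      unsigned long long loops = (unsigned long long)usecs * loops_per_jiffy * HZ;
      return loops / 1000000;
  #else
      /* only a 32bit result available: pre-shift both sides so the product fits */
      return (usecs * ((loops_per_jiffy * HZ) >> 10)) / (1000000 >> 10);
  #endif
  }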
Build and run tested on ColdFire and ARAnyM. Build tested only on 68328
and 68360 (CPU32).
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Currently trap_init() is an empty function for m68knommu. Instead
the vectors are being setup as part of the IRQ initialization.
This is inconsistent with m68k and other architectures.
Change the local init_vectors() to be trap_init(), and init the
vectors at the correct time during startup. This will help the merge
of m68k and m68knommu trap code in the future.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The ColdFire 5206 and 5206e CPU families are almost identical, we can
easily merge the platform support code for them. All the differences
are dealt with in the current include/asm/5206sim.h.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The following patch merges the mmu and non-mmu versions of the m68k
bitops.h files. Now there is a good deal of difference between the two
files, but none of it is actually an mmu specific difference. It is
all about the specific m68k/coldfire variant we are targeting. So it
makes an awful lot of sense to merge these into a single bitops.h.
There are a number of ways I can see to factor this code. The approach
I have taken here is to keep the various versions of each macro/function
type together. This means there is some ifdefery within each to handle
the different CPU types.
I have added some comments in a couple of appropriate places to make
it clear what differences we are dealing with, specifically the
instruction and addressing mode differences.
The merged form keeps the same underlying optimizations for each CPU
type for all the general bit clear/set/change and find bit operations.
It does switch to using the generic le operations though, instead of
any local variants.
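A condensed sketch of the resulting ifdef style for one operation (the
real header also distinguishes the 68000/CPU32 bset form):

  static inline void sketch_set_bit(int nr, volatile unsigned long *vaddr)
  {
  #ifdef CONFIG_COLDFIRE
      /* ColdFire: byte-wide bset through a simple (An) addressing mode */
      char *p = (char *)vaddr + ((nr ^ 31) >> 3);
      __asm__ __volatile__("bset %1,(%0)"
          : : "a" (p), "di" (nr & 7)
          : "memory");
  #else
      /* 68020+: a single bfset using the richer bit-field addressing */
      __asm__ __volatile__("bfset %1{%0:#1}"
          : : "d" (nr ^ 31), "o" (*vaddr)
          : "memory");
  #endif
  }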
Build tested on ColdFire, 68328, 68360 (which is cpu32) and 68020+.
Run tested on ColdFire and ARAnyM.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
The non-MMU m68k targets can use the same asm/system.h as the MMU
targets. So switch the current system_mm.h to be system.h and remove
system_no.h.
The assembly support code for the non-MMU resume functions needs to
be modified to match the now common switch_to() macro. Specifically
this means correctly saving and restoring the status flags in the case
of the ColdFire resume, and some reordering of the code to not use
registers before they are saved or after they are restored.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The contents of asm/hardirq.h are pretty straightforward for both the
MMU (hardirq_mm.h) and non-MMU (hardirq_no.h) include files. Merge the
two back into a single file.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
The non-mmu and mmu versions of the module loader module.c are
nearly identical. Merge them back to a single module.c. There is
a little bit of re-ordering of the struct and enum definitions in
module.h to keep the ifdefery to a minimum.
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
arch/m68k/mm/init_no.c:123: warning: format "%d" expects type "int", but argument 2 has type "long unsigned int"
And use pr_notice() while we're at it.
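The shape of the fix, for illustration (not the exact line from
init_no.c):

  #include <linux/printk.h>

  static void report_freed(unsigned long start, unsigned long end)
  {
      /* the byte count is unsigned long, so print it with %lu, via pr_notice() */
      pr_notice("Freeing unused kernel memory: %luk freed\n", (end - start) >> 10);
  }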
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Some recent changes to the way that ACPI handles wakeup flags
mean that the XO15EC ACPI device is not wakeup-capable by
default, so device_set_wakeup_enable() does nothing.
Use device_init_wakeup() to mark the device as wakeup capable,
and to enable wakeups.
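That amounts to a call of this shape (the wrapper function shown here
is only a sketch; the driver makes the call directly in its add path):

  #include <linux/acpi.h>
  #include <linux/pm_wakeup.h>

  /* mark the ACPI device wakeup-capable and enable wakeups in one step */
  static int xo15_sci_enable_wakeup(struct acpi_device *device)
  {
      return device_init_wakeup(&device->dev, true);
  }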
Signed-off-by: Daniel Drake <dsd@laptop.org>
Link: http://lkml.kernel.org/r/20110724173430.BE03C9D401C@zog.reactivated.net
Signed-off-by: Ingo Molnar <mingo@elte.hu>
As reported by Randy Dunlap, CONFIG_POWER_SUPPLY=m caused a
compile error:
arch/x86/built-in.o: In function `battery_status_changed':
olpc-xo15-sci.c:(.text+0x3acdd): undefined reference to `power_supply_get_by_name'
olpc-xo15-sci.c:(.text+0x3ad04): undefined reference to `power_supply_changed'
The SCI drivers, being bool options, require POWER_SUPPLY to be built in.
Use select to make that a hard requirement and avoid this build
failure.
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Daniel Drake <dsd@laptop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (237 commits)
ARM: 7004/1: fix traps.h compile warnings
ARM: 6998/2: kernel: use proper memory barriers for bitops
ARM: 6997/1: ep93xx: increase NR_BANKS to 16 for support of 128MB RAM
ARM: Fix build errors caused by adding generic macros
ARM: CPU hotplug: ensure we migrate all IRQs off a downed CPU
ARM: CPU hotplug: pass in proper affinity mask on IRQ migration
ARM: GIC: avoid routing interrupts to offline CPUs
ARM: CPU hotplug: fix abuse of irqdesc->node
ARM: 6981/2: mmci: adjust calculation of f_min
ARM: 7000/1: LPAE: Use long long printk format for displaying the pud
ARM: 6999/1: head, zImage: Always Enter the kernel in ARM state
ARM: btc: avoid invalidating the branch target cache on kernel TLB maintanence
ARM: ARM_DMA_ZONE_SIZE is no more
ARM: mach-shark: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-sa1100: move ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-realview: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-pxa: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-ixp4xx: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-h720x: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
ARM: mach-davinci: move from ARM_DMA_ZONE_SIZE to mdesc->dma_zone_size
...
This patch removes all the module loader hook implementations in the
architecture specific code where the functionality is the same as that
now provided by the recently added default hooks.
Signed-off-by: Jonas Bonn <jonas@southpole.se>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The idea is from Avi:
| We could cache the result of a miss in an spte by using a reserved bit, and
| checking the page fault error code (or seeing if we get an ept violation or
| ept misconfiguration), so if we get repeated mmio on a page, we don't need to
| search the slot list/tree.
| (https://lkml.org/lkml/2011/2/22/221)
When the page fault is caused by mmio, we cache the info in the shadow
page table and also set the reserved bits in the shadow page table, so
if the mmio is hit again we can quickly identify it and emulate it
directly.
Searching for an mmio gfn in the memslots is heavy since we need to
walk all memslots; this feature reduces that cost, and also avoids
walking the guest page table for the soft mmu.
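The caching itself boils down to something like the following sketch
(the mask value and helper names here are illustrative, not the exact
mmu.c code):

  #define MMIO_SPTE_MASK  (1ULL << 51)    /* an otherwise-reserved bit */

  /* encode the gfn and access bits into a not-present, reserved-bit spte */
  static u64 make_mmio_spte(u64 gfn, unsigned int access)
  {
      return MMIO_SPTE_MASK | (gfn << PAGE_SHIFT) | (access & 0x7);
  }

  /* a later fault that finds such an spte is mmio: no memslot walk needed */
  static bool is_mmio_spte(u64 spte)
  {
      return (spte & MMIO_SPTE_MASK) != 0;
  }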
[jan: fix operator precedence issue]
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Use RCU to protect shadow page tables that are about to be freed, so
we can safely walk them; this should run fast and is needed by the
mmio page fault path.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Now the spte only goes from nonpresent to present or from present to
nonpresent, so we can use some tricks to set/clear the spte
non-atomically, as the Linux kernel does.
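On a 32-bit host the trick looks roughly like this (a sketch close to
the idea, not the exact mmu.c code; 64-bit hosts just do a plain u64
store):

  union split_spte {
      struct {
          u32 spte_low;
          u32 spte_high;
      };
      u64 spte;
  };

  /* nonpresent -> present: store the high word first, then the low word
   * that carries the present bit, so no half-built spte is ever visible */
  static void __set_spte(u64 *sptep, u64 new_spte)
  {
      union split_spte *ssptep = (union split_spte *)sptep;
      union split_spte sspte = { .spte = new_spte };

      ssptep->spte_high = sspte.spte_high;
      smp_wmb();
      ssptep->spte_low = sspte.spte_low;
  }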
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Introduce some interfaces to modify the spte as the Linux kernel does:
- mmu_spte_clear_track_bits: sets the spte from present to nonpresent
  and tracks the status bits (accessed/dirty) of the spte
- mmu_spte_clear_no_track: the same as mmu_spte_clear_track_bits,
  except that it does not track the status bits
- mmu_spte_set: sets the spte from nonpresent to present
- mmu_spte_update: only updates the status bits
Setting an spte from present to present is no longer allowed. Later we
can drop the atomic operation for X86_32 hosts; this is preparatory
work for reading the spte on X86_32 hosts outside of the mmu lock.
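For illustration, two of the helpers roughly take this shape
(simplified signatures and bookkeeping; mask and predicate names follow
the existing mmu code):

  /* only legal for a nonpresent -> present transition */
  static void mmu_spte_set(u64 *sptep, u64 new_spte)
  {
      WARN_ON(is_shadow_present_pte(*sptep));
      __set_spte(sptep, new_spte);    /* the non-atomic store helper */
  }

  /* present -> nonpresent, reporting whether accessed/dirty were found set */
  static int mmu_spte_clear_track_bits(u64 *sptep)
  {
      u64 old_spte = *sptep;

      __set_spte(sptep, 0ull);
      return !!(old_spte & (shadow_accessed_mask | shadow_dirty_mask));
  }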
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Introduce handle_abnormal_pfn to handle a fault pfn on the page fault
path, and mmu_invalid_pfn to handle a fault pfn on the prefetch path.
This is preparatory work for mmio page fault support.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
If the page fault is caused by mmio, the gfn can not be found in the
memslots, and 'bad_pfn' is returned on the gfn_to_hva path, so we can
use 'bad_pfn' to identify the mmio page fault.
And, to clarify the meaning of the mmio pfn, we return the fault page
instead of the bad page when the gfn is not allowed to be prefetched.
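In other words, roughly (the helper name here is illustrative):

  /* a gfn with no memslot backing comes back as bad_pfn on the
   * gfn_to_hva path, which marks the fault as mmio */
  static bool is_mmio_fault_pfn(pfn_t pfn)
  {
      return pfn == bad_pfn;
  }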
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
The idea is from Avi:
| Maybe it's time to kill off bypass_guest_pf=1. It's not as effective as
| it used to be, since unsync pages always use shadow_trap_nonpresent_pte,
| and since we convert between the two nonpresent_ptes during sync and unsync.
Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>