1095 Commits

Dave Martin
641d8233a6 ARM: mm: cache-v6: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:06 +01:00
Dave Martin
d5b5b2e2f8 ARM: mm: cache-v4wt: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:06 +01:00
Dave Martin
eec95e56e6 ARM: mm: cache-v4wb: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:06 +01:00
Dave Martin
54d4e9ebbc ARM: mm: cache-v4: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:06 +01:00
Dave Martin
9c373968d6 ARM: mm: cache-v3: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:05 +01:00
Dave Martin
9bc7491341 ARM: mm: cache-fa: Use the new processor struct macros
Signed-off-by: Dave Martin <dave.martin@linaro.org>
2011-07-07 15:31:05 +01:00
Dave Martin
66a625a881 ARM: mm: proc-macros: Add generic proc/cache/tlb struct definition macros
This patch adds some generic macros to reduce boilerplate when
declaring certain common structures in arch/arm/mm/*.S

Thanks to Russell King for outlining what the
define_processor_functions macro could look like.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
2011-07-07 15:30:35 +01:00
Will Deacon
38a8914f9a ARM: 6987/1: l2x0: fix disabling function to avoid deadlock
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.

This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling in the control
register, preventing livelock from occurring.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-06 20:48:08 +01:00
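
For the l2x0 disable fix above, a minimal C sketch of the resulting pattern: flush the L2 while holding the lock, then write the control register with writel_relaxed so no outer_sync is issued while l2x0_lock is held. The flush helper name and the register offset are assumptions for illustration, not quotes from the patch.

    /* Hedged sketch only; assumes l2x0_base, l2x0_lock and a flush-all
     * helper already exist in the driver (arch/arm/mm/cache-l2x0.c). */
    #include <linux/spinlock.h>
    #include <linux/io.h>

    #define L2X0_CTRL 0x100   /* control register offset (assumed) */

    static void l2x0_disable(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&l2x0_lock, flags);
        __l2x0_flush_all();                       /* flush before turning the L2 off */
        writel_relaxed(0, l2x0_base + L2X0_CTRL); /* relaxed: no outer_sync under the lock */
        dsb();                                    /* make the disable visible (assumed) */
        spin_unlock_irqrestore(&l2x0_lock, flags);
    }
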
Russell King
0371d3f7e8 ARM: move memory layout sanity checking before meminfo initialization
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock.  This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-05 20:27:16 +01:00
Russell King
40f0b90a2f ARM: entry: data abort: ensure r5 is preserved by abort functions
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:12 +01:00
Russell King
108f6af0a8 ARM: entry: data abort: always use r6 for offset
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:12 +01:00
Russell King
e22c12f914 ARM: entry: data abort: use r2 as base of pt_regs rather than stack
Now that we pass r2 into these helper functions as the pointer to
pt_regs, use r2 as the base of the registers on the stack rather
than using the stack pointer directly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
da74047257 ARM: entry: data abort: tail-call the main data abort handler
Tail-call the main C data abort handler code from the per-CPU helper
code.  Update the comments in the code wrt the new calling and return
register state.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
0d147db0c1 ARM: entry: data abort: avoid using r2 in abort helpers
This allows us to pass the pt_regs pointer in to these functions
ready for tail-calling the abort handler.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
3e287bec6f ARM: entry: data abort: arrange for CPU abort helpers to take pc/psr in r4/r5
Re-jig the CPU abort helpers to take the PC/PSR in r4/r5 rather
than r2/r3.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:11 +01:00
Russell King
8dfe7ac96f ARM: entry: prefetch abort: tail-call the main prefetch abort handler
Tail-call the main C prefetch abort handler code from the per-CPU
helper code.  Also note that the helper function becomes ABI
compliant in terms of the registers preserved.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:10 +01:00
Russell King
02fe2845d6 ARM: entry: avoid enabling interrupts in prefetch/data abort handlers
Stop enabling interrupts in the abort handler assembly code when the
parent context had them enabled, and move this into the breakpoint/
page/alignment fault handlers instead.

This gets rid of some special-casing for the breakpoint fault handlers
from the low level abort handler path.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-02 10:56:00 +01:00
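
A hedged sketch of the pattern described above as it would look in a C fault handler: IRQs are re-enabled only when the aborted context had them enabled, instead of unconditionally in the entry assembly. The handler body is elided; interrupts_enabled() is the ARM ptrace helper.

    #include <linux/irqflags.h>
    #include <asm/ptrace.h>

    /* Sketch: enable IRQs here, in the handler, iff the parent context
     * had them enabled; the entry code no longer does this. */
    static int do_page_fault_sketch(unsigned long addr, unsigned int fsr,
                                    struct pt_regs *regs)
    {
        if (interrupts_enabled(regs))
            local_irq_enable();

        /* ... actual fault handling elided ... */
        return 0;
    }
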
Peter Zijlstra
a8b0ca17b8 perf: Remove the nmi parameter from the swevent and overflow interface
The nmi parameter indicated whether we could do wakeups from the current
context; if not, we would set some state and self-IPI and let the
resulting interrupt do the wakeup.

For the various event classes:

  - hardware: nmi=0; PMI is in fact an NMI or we run irq_work_run from
    the PMI-tail (ARM etc.)
  - tracepoint: nmi=0; since tracepoints can fire from NMI context.
  - software: nmi=[0,1]; some, like the schedule thing, cannot
    perform wakeups and hence need 0.

As one can see, there is very little nmi=1 usage, and the down-side of
not using it is that on some platforms some software events can have a
jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).

The up-side however is that we can remove the nmi parameter and save a
bunch of conditionals in fast paths.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Michael Cree <mcree@orcon.net.nz>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
Cc: Anton Blanchard <anton@samba.org>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-07-01 11:06:35 +02:00
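
A hedged before/after sketch of a software-event callsite under this interface change; the event id and arguments are illustrative rather than taken from a particular caller.

    #include <linux/perf_event.h>

    static void count_page_fault(struct pt_regs *regs, unsigned long addr)
    {
        /* Before: perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, 0, regs, addr);
         * the third argument was the nmi flag. */

        /* After this patch the nmi parameter is gone: */
        perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr);
    }
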
Russell King
8b4186160b ARM: entry: prefetch abort helper: pass aborted pc in r4 rather than r0
This avoids unnecessary instructions for CPUs which implement the IFAR
(instruction fault address register).

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-30 11:04:59 +01:00
Russell King
198a0a927a ARM: entry: abort-macro: simplify do_ldrd_abort
We can test bits 27:25 and 20 of the instruction at the same time;
there's no need to separate out the check of bit 20.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:06:37 +01:00
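
The combined test can be pictured in C as a single mask covering bits 27:25 and bit 20; the actual macro is ARM assembly, so the snippet below is only an illustration of testing both fields at once.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustration: check instruction bits 27:25 and bit 20 with one mask. */
    static bool bits_27_25_and_20_clear(uint32_t insn)
    {
        const uint32_t mask = (0x7u << 25) | (1u << 20);

        return (insn & mask) == 0;
    }
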
Russell King
be020f8618 ARM: entry: abort-macro: specify registers to be used for macros
Require all callers of abort macros to specify the registers to be
used.  This improves the documentation at the callsites as to which
registers are being used by this assembly code.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-29 10:06:36 +01:00
Russell King
b69874e4f5 ARM: pm: arrange for cpu_proc_init() to be called on resume
cpu_proc_init() does processor-specific initialization, which we do
at boot time.  We have been omitting it on resume, which causes some
of this initialization to be skipped.  We have also been skipping it
during SMP initialization.

Ensure that cpu_proc_init() is always called appropriately by
moving it into cpu_init(), and move cpu_init() to a more appropriate
point in the boot initialization.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:12 +01:00
Russell King
111b20d013 ARM: pm: ensure ARMv7 CPUs save and restore the TLS register
Ensure that the TLS register is saved and restored over a suspend
cycle, so that userspace programs don't see a corrupted TLS value.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:09 +01:00
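
For reference, a hedged sketch of reading and writing the ARMv7 user TLS register (TPIDRURO, CP15 c13/c0/3) from C; the patch itself does the equivalent in the suspend/resume assembly.

    #include <linux/types.h>

    /* Sketch only: TPIDRURO accessors; the real save/restore is assembly. */
    static inline u32 read_tpidruro(void)
    {
        u32 tls;

        asm volatile("mrc p15, 0, %0, c13, c0, 3" : "=r" (tls));
        return tls;
    }

    static inline void write_tpidruro(u32 tls)
    {
        asm volatile("mcr p15, 0, %0, c13, c0, 3" : : "r" (tls));
    }
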
Russell King
7a0ee92b4a ARM: pm: proc-v7: fix missing struct processor pointers for suspend code
Add the missing suspend/resume pointers for the suspend code.  This
is needed when building for multiple CPUs.

Tested-by: Kevin Hilman <khilman@ti.com>
Acked-by: Jean Pihet <j-pihet@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-24 08:47:01 +01:00
John Linn
b85a3ef4ac ARM: Xilinx: Adding Xilinx board support
This first board support is minimal, just enough to get a system up
and running on the Xilinx platform.

This platform reuses the clock implementation from plat-versatile, and
it depends entirely on CONFIG_OF support.  There is only one board
support file, which obtains all device information from a device tree
blob (dtb) passed to the kernel at boot time.

Signed-off-by: John Linn <john.linn@xilinx.com>
2011-06-20 11:52:30 -06:00
Russell King
8f4b8c7613 ARM: initrd: disable initrds outside of memory
We can't cope with initrds outside of memory, so check that the
initrd is within memory declared to the kernel before using it.
Otherwise we're likely to OOPS during boot.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-11 00:43:21 +01:00
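
A hedged sketch of the kind of check described: verify that the initrd region lies inside memory known to memblock before using it, and ignore it otherwise. The variable names follow the usual ARM conventions and are assumptions, not quotes.

    #include <linux/kernel.h>
    #include <linux/memblock.h>

    static phys_addr_t phys_initrd_start;   /* as parsed from tags/DT (assumed) */
    static unsigned long phys_initrd_size;

    /* Sketch: drop the initrd if it falls outside declared memory. */
    static void arm_initrd_check(void)
    {
        if (phys_initrd_size &&
            !memblock_is_region_memory(phys_initrd_start, phys_initrd_size)) {
            pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region - disabling initrd\n",
                   (u64)phys_initrd_start, phys_initrd_size);
            phys_initrd_start = phys_initrd_size = 0;
        }
    }
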
Russell King
a0a54d37b4 Revert "ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks"
This reverts commit 45b95235b0ac86cef2ad4480b0618b8778847479.

Will Deacon reports that:

 In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
 I updated the ASID rollover code to use only the kernel page tables
 whilst updating the ASID.

 Unfortunately, the code to restore the user page tables was part of a
 later patch which isn't yet in mainline, so this leaves the code
 quite broken.

We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the context switching.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:13:16 +01:00
Russell King
07989b7ad6 Revert "ARM: 6943/1: mm: use TTBR1 instead of reserved context ID"
This reverts commit 52af9c6cd863fe37d1103035ec7ee22ac1296458.

Will Deacon reports that:

 In 52af9c6c ("ARM: 6943/1: mm: use TTBR1 instead of reserved context ID")
 I updated the ASID rollover code to use only the kernel page tables
 whilst updating the ASID.

 Unfortunately, the code to restore the user page tables was part of a
 later patch which isn't yet in mainline, so this leaves the code
 quite broken.

We're also in the process of eliminating __ARCH_WANT_INTERRUPTS_ON_CTXSW
from ARM, so let's revert these until we can properly sort out what we're
doing with the ARM context switching.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-09 10:13:04 +01:00
Rabin Vincent
45f6d7e0e6 ARM: 6951/1: include .bss in memory layout information
The "Virtual memory kernel layout" message at startup already prints
.text and .data.  Print .bss too.

Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-06 10:56:10 +01:00
Ben Hutchings
9a819d8ac8 ARM: 6948/1: Fix .size directives for __arm{7,9}tdmi_proc_info
gas used to accept (and ignore?) .size directives which referred to
undefined symbols, as these do.  In binutils 2.21 these are treated
as fatal errors.

The issue in proc-arm7tdmi.S was also fixed independently by Peter
Chubb.

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-06-06 10:56:10 +01:00
Russell King
239df0fd5e Merge branches 'devel', 'devel-stable' and 'fixes' into for-linus 2011-05-27 22:59:57 +01:00
Russell King
cc780af5ac ARM: kill pmd_off()
pmd_off() has only one user, so let's consolidate it into its only
user.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 19:50:30 +01:00
Will Deacon
45b95235b0 ARM: 6944/1: mm: allow ASID 0 to be allocated to tasks
Now that ASID 0 is no longer used as a reserved value, allow it to be
allocated to tasks.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:33 +01:00
Will Deacon
52af9c6cd8 ARM: 6943/1: mm: use TTBR1 instead of reserved context ID
On ARMv7 CPUs that cache first level page table entries (like the
Cortex-A15), using a reserved ASID while changing the TTBR or flushing
the TLB is unsafe.

This is because the CPU may cache the first level entry as the result of
a speculative memory access while the reserved ASID is assigned. After
the process owning the page tables dies, the memory will be reallocated
and may be written with junk values which can be interpreted as global,
valid PTEs by the processor. This will result in the TLB being populated
with bogus global entries.

This patch avoids the use of a reserved context ID in the v7 switch_mm
and ASID rollover code by temporarily using the swapper_pg_dir pointed
at by TTBR1, which contains only global entries that are not tagged
with ASIDs.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:33 +01:00
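
Roughly, the switch_mm sequence this patch describes can be pictured as: point TTBR0 at the global-only tables already held in TTBR1 while the context ID changes, then install the new TTBR0. The real code is ARMv7 assembly in proc-v7.S; the inline-asm rendering below is a hedged sketch (CP15 encodings: TTBR0 c2/c0/0, TTBR1 c2/c0/1, CONTEXTIDR c13/c0/1).

    #include <linux/types.h>

    /* Hedged sketch of the idea only; ordering simplified. */
    static inline void v7_switch_mm_sketch(u32 new_ttbr0, u32 new_contextidr)
    {
        u32 ttbr1;

        asm volatile("mrc p15, 0, %0, c2, c0, 1" : "=r" (ttbr1));  /* TTBR1 = swapper_pg_dir */
        asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (ttbr1)); /* TTBR0 <- global-only tables */
        asm volatile("isb");
        asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (new_contextidr)); /* new ASID */
        asm volatile("isb");
        asm volatile("mcr p15, 0, %0, c2, c0, 0" : : "r" (new_ttbr0));       /* real TTBR0 */
        asm volatile("isb");
    }
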
Catalin Marinas
d427958a46 ARM: 6942/1: mm: make TTBR1 always point to swapper_pg_dir on ARMv6/7
This patch makes TTBR1 point to swapper_pg_dir so that global, kernel
mappings can be used exclusively on v6 and v7 cores where they are
needed.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:32 +01:00
Will Deacon
a248b13b21 ARM: 6941/1: cache: ensure MVA is cacheline aligned in flush_kern_dcache_area
The v6 and v7 implementations of flush_kern_dcache_area do not align
the passed MVA to the size of a cacheline in the data cache. If a
misaligned address is used, only a subset of the requested area will
be flushed. This has been observed to cause failures in SMP boot where
the secondary_data initialised by the primary CPU is not cacheline
aligned, causing the secondary CPUs to read incorrect values for their
pgd and stack pointers.

This patch ensures that the base address is cacheline aligned before
flushing the d-cache.

Cc: <stable@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 12:14:32 +01:00
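
The fix amounts to rounding the region out to cacheline boundaries before walking it line by line; a hedged C rendering follows (the real change is in the v6/v7 cache assembly, and the line size here is just an example value).

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 32u   /* example; the real code reads the line size from CTR */

    /* Sketch: align the base down so a misaligned MVA cannot leave a
     * partial line unflushed. */
    static void flush_dcache_area_sketch(uintptr_t addr, size_t size,
                                         void (*clean_inv_line)(uintptr_t mva))
    {
        uintptr_t end = addr + size;

        addr &= ~(uintptr_t)(CACHE_LINE_SIZE - 1);
        for (; addr < end; addr += CACHE_LINE_SIZE)
            clean_inv_line(addr);
    }
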
Will Deacon
40f7bfe4f1 ARM: 6914/1: sparsemem: fix highmem detection when using SPARSEMEM
sanity_check_meminfo walks over the registered memory banks and attempts
to split banks across lowmem and highmem when they would otherwise
overlap with the vmalloc space.

When SPARSEMEM is used, there are two potential problems that occur
when the virtual address of the start of a bank is equal to vmalloc_min.

 1.) The end of lowmem is calculated as __pa(vmalloc_min - 1) + 1.
     In the above scenario, this will give the end address of the
     previous bank, rather than the actual bank we are interested in.
     This value is later used as the memblock limit and artificially
     restricts the total amount of available memory.

 2.) The checks that determine whether a bank belongs to highmem only
     test whether __va(bank->start) is greater or less than vmalloc_min.
     In the case that it is equal, the bank is incorrectly treated as
     lowmem, which hoses the vmalloc area.

This patch fixes these two problems by checking whether the virtual
start address of a bank is >= vmalloc_min and then calculating
lowmem_end by finding the virtual end address of the highest lowmem
bank.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 10:23:25 +01:00
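
A hedged sketch of the comparison this patch describes: a bank whose virtual start address is at or above vmalloc_min is classed as highmem, and the lowmem limit comes from the end of the highest lowmem bank. Types and field names are simplified for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    struct bank_sketch {
        uintptr_t virt_start;   /* __va(bank->start), precomputed for the sketch */
        uintptr_t virt_end;
        bool highmem;
    };

    /* Sketch: classify banks against vmalloc_min and track the lowmem limit. */
    static uintptr_t classify_banks(struct bank_sketch *banks, int nr,
                                    uintptr_t vmalloc_min)
    {
        uintptr_t lowmem_end = 0;

        for (int i = 0; i < nr; i++) {
            banks[i].highmem = banks[i].virt_start >= vmalloc_min;  /* note: >=, not > */
            if (!banks[i].highmem && banks[i].virt_end > lowmem_end)
                lowmem_end = banks[i].virt_end;
        }
        return lowmem_end;  /* later used as the memblock limit */
    }
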
Will Deacon
7b7bf499f7 ARM: 6913/1: sparsemem: allow pfn_valid to be overridden when using SPARSEMEM
In commit eb33575c ("[ARM] Double check memmap is actually valid with a
memmap has unexpected holes V2"), a new function, memmap_valid_within,
was introduced to mmzone.h so that holes in the memmap which pass
pfn_valid in SPARSEMEM configurations can be detected and avoided.

The fix to this problem checks that the pfn <-> page linkages are
correct by calculating the page for the pfn and then checking that
page_to_pfn on that page returns the original pfn. Unfortunately, in
SPARSEMEM configurations, this results in reading from the page flags to
determine the correct section. Since the memmap here has been freed,
junk is read from memory and the check is no longer robust.

In the best case, reading from /proc/pagetypeinfo will give you the
wrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung
CPUs. Furthermore, ioremap implementations that use pfn_valid to
disallow the remapping of normal memory will break.

This patch allows architectures to provide their own pfn_valid function
instead of using the default implementation used by sparsemem. The
architecture-specific version is aware of the memmap state and will
return false when passed a pfn for a freed page within a valid section.

Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-26 10:23:24 +01:00
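
A hedged sketch of an architecture-specific pfn_valid() of the sort this patch enables: answer from the registered memory layout rather than from (possibly freed) memmap pages. The memblock-based form is an assumption about how ARM uses the hook, not a quote of the patch.

    #include <linux/memblock.h>
    #include <asm/memory.h>

    /* Sketch: validity comes from the memory regions themselves, so the
     * answer stays correct even after unused memmap has been freed. */
    int pfn_valid(unsigned long pfn)
    {
        return memblock_is_memory(__pfn_to_phys(pfn));
    }

Such an override is typically gated by a config symbol along the lines of HAVE_ARCH_PFN_VALID so that other architectures keep the generic sparsemem implementation.
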
Peter Zijlstra
1c39517696 mm: now that all old mmu_gather code is gone, remove the storage
Fold all the mmu_gather rework patches into one for submission

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Tony Luck <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-25 08:39:16 -07:00
David Rientjes
7bf02ea22c arch, mm: filter disallowed nodes from arch specific show_mem functions
Architectures that implement their own show_mem() function did not pass
the filter argument to show_free_areas() to appropriately avoid emitting
the state of nodes that are disallowed in the current context.  This patch
now passes the filter argument to show_free_areas() so those nodes are now
avoided.

This patch also removes the show_free_areas() wrapper around
__show_free_areas() and converts existing callers to pass an empty filter.

ia64 emits additional information for each node, so skip_free_areas_zone()
must be made global to filter disallowed nodes and it is converted to use
a nid argument rather than a zone for this use case.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: James Bottomley <jejb@parisc-linux.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-05-25 08:39:03 -07:00
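
A hedged sketch of the resulting shape of an arch show_mem(): the filter argument is accepted and forwarded to show_free_areas() instead of being dropped; the per-arch statistics printing is elided.

    #include <linux/mm.h>

    /* Sketch: forward the node filter rather than ignoring it. */
    void show_mem(unsigned int filter)
    {
        /* ... arch-specific page accounting elided ... */
        show_free_areas(filter);
    }
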
Russell King
03eb14199e Merge branch 'devicetree/arm-next' of git://git.secretlab.ca/git/linux-2.6 into devel-stable 2011-05-25 00:08:17 +01:00
Russell King
9a55d9752d Merge branch 'devel-stable' into for-linus
Conflicts:
	arch/arm/Kconfig
	arch/arm/mach-ns9xxx/include/mach/uncompress.h
2011-05-23 19:28:04 +01:00
Russell King
ec19628d72 Merge branches 'consolidate', 'ep93xx', 'fixes', 'misc', 'mmci', 'remove' and 'spear' into for-linus 2011-05-23 19:27:40 +01:00
Russell King
4b60e5f90d Merge branches 'consolidate-clksrc', 'consolidate-flash', 'consolidate-generic', 'consolidate-smp', 'consolidate-stmp' and 'consolidate-zones' into consolidate 2011-05-23 18:05:10 +01:00
Grant Likely
93c02ab40a arm/dt: probe for platforms via the device tree
If a dtb is passed to the kernel then the kernel needs to iterate
through compiled-in mdescs looking for one that matches and move the
dtb data to a safe location before it gets accidentally overwritten by
the kernel.

This patch creates a new function, setup_machine_fdt() which is
analogous to the setup_machine_atags() created in the previous patch.
It does all the early setup needed to use a device tree machine
description.

v5: - Print warning when neither dtb nor atags are passed to the kernel
    - Fix bug in setting of __machine_arch_type to the selected machine,
      not just the last machine in the list.
      Reported-by: Tixy <tixy@yxit.co.uk>
    - Copy command line directly into boot_command_line instead of cmd_line
v4: - Dump some output when a matching machine_desc cannot be found
v3: - Added processing of reserved list.
    - Backed out the v2 change that copied instead of reserved the
      dtb.  dtb is reserved again and the real problem was fixed by
      using alloc_bootmem_align() for early allocation of RAM for
      unflattening the tree.
    - Moved cmd_line and initrd changes to earlier patch to make series
      bisectable.
v2: Changed to save the dtb by copying into an allocated buffer.
    - Since the dtb will very likely be passed in the first 16k of ram
      where the interrupt vectors live, memblock_reserve() is
      insufficient to protect the dtb data.

[based on work originally written by Jeremy Kerr <jeremy.kerr@canonical.com>]
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2011-05-23 09:30:20 -06:00
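
In outline, early setup gains a DT path alongside the existing ATAG path. The sketch below only shows the call shape; both helper names appear in the commit messages, while the parameter types and the wrapper are assumptions for illustration.

    /* Hedged sketch of the early machine-selection flow. */
    struct machine_desc;

    struct machine_desc *setup_machine_fdt(unsigned long dt_phys);       /* this patch */
    struct machine_desc *setup_machine_atags(unsigned int machine_nr);   /* previous patch */

    static struct machine_desc *pick_machine(unsigned long dt_phys,
                                             unsigned int machine_nr)
    {
        struct machine_desc *mdesc;

        mdesc = setup_machine_fdt(dt_phys);          /* try a device tree blob first */
        if (!mdesc)
            mdesc = setup_machine_atags(machine_nr); /* otherwise fall back to ATAGS */
        return mdesc;
    }
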
Saeed Bishara
31bee4cf0e ARM: 6899/1: fix the note about dcache lazy flushing for SMP systems
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-20 22:39:17 +01:00
Saeed Bishara
8373dc38ca ARM: 6901/1: remove unneeded check of the cache_is_vipt_nonaliasing()
When cache_is_vipt_nonaliasing() is true, pte_exec() is always true at
the end of this function, so there is no need for the additional check.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Saeed Bishara <saeed@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-16 15:42:42 +01:00
Will Deacon
9af386c8dc ARM: 6890/1: memmap: only free allocated memmap entries when using SPARSEMEM
The SPARSEMEM code allocates memmap entries only for sections which are
present (i.e. those which contain some valid memory). The membank checks
in free_unused_memmap do not take this into account and can incorrectly
attempt to free memory which is not allocated, resulting in a BUG() in
the bootmem code.

However, if memory is configured as follows:

    |<----section---->|<----hole---->|<----section---->|
    +--------+--------+--------------+--------+--------+
    | bank 0 | unused |              | bank 1 | unused |
    +--------+--------+--------------+--------+--------+

where a bank only occupies part of a section, the memmap allocated for
the remainder of the section *can* be freed.

This patch modifies the checks in free_unused_memmap so that only valid
memmap entries are considered for removal.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-12 10:52:00 +01:00
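
A hedged sketch of the kind of guard described: when working out how far back the freed memmap range may extend, never reach below the start of the SPARSEMEM section that contains the previous bank's end. PAGES_PER_SECTION and the pfn bookkeeping follow the kernel's conventions; the exact code differs.

    #include <linux/kernel.h>
    #include <linux/mmzone.h>

    /* Sketch: clamp the start of the to-be-freed memmap range so only
     * memmap inside a present section is ever considered for freeing. */
    static unsigned long clamp_free_start(unsigned long bank_start_pfn,
                                          unsigned long prev_bank_end_pfn)
    {
    #ifdef CONFIG_SPARSEMEM
        bank_start_pfn = min(bank_start_pfn,
                             ALIGN(prev_bank_end_pfn, PAGES_PER_SECTION));
    #endif
        return bank_start_pfn;
    }
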
Russell King
be20902ba6 ARM: use ARM_DMA_ZONE_SIZE to adjust the zone sizes
Rather than each platform providing its own function to adjust the
zone sizes, use the new ARM_DMA_ZONE_SIZE definition to perform this
adjustment.  This ensures that the actual DMA zone size and the
ISA_DMA_THRESHOLD/MAX_DMA_ADDRESS definitions are consistent with
each other, and moves this complexity out of the platform code.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-05-12 08:36:53 +01:00
Grant Likely
9eb8f6743b arm/dt: Allow CONFIG_OF on ARM
Add some basic empty infrastructure for DT support on ARM.

v5: - Fix off-by-one error in size calculation of initrd
    - Stop mucking with cmd_line, and load command line from dt into
      boot_command_line instead which matches the behaviour of ATAGS booting
v3: - moved cmd_line export and initrd setup to this patch to make the
      series bisectable.
    - switched to alloc_bootmem_align() for allocation when
      unflattening the device tree.  memblock_alloc() was not the
      right interface.

Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
2011-05-11 15:14:29 +02:00