Currently the linux.bin target creates both linux.bin and linux.bin.ub.
Add linux.bin.ub as a separate target so that linux.bin.ub can be generated on its own.
Signed-off-by: Jason Wu <huanyu@xilinx.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
The main reason is that this driver can be used by both ARM
and PPC. This is part of preparing for the move to a generic location.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Simplify timer initialization and prepare the driver
for moving to the drivers/clocksource folder.
Also remove the system-timer property from the binding because
the name is too generic.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
TCR.TBI0 can be used to cause hardware address translation to ignore the
top byte of userspace virtual addresses. Whilst not especially useful in
standard C programs, this can be used by JITs to `tag' pointers with
various pieces of metadata.
This patch enables this bit for AArch64 Linux, and adds a new file to
Documentation/arm64/ which describes some potential caveats when using
tagged virtual addresses.
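As an illustration only (the helper names below are hypothetical and not
part of the patch), a JIT might tag and strip pointers like this:

	/* With TBI0 enabled, bits [63:56] of a userspace (TTBR0) address are
	 * ignored by address translation, so metadata can live there. */
	#include <stdint.h>

	static inline void *tag_ptr(void *p, uint8_t tag)
	{
		return (void *)(((uintptr_t)p & ~(0xffULL << 56)) |
				((uintptr_t)tag << 56));
	}

	static inline void *untag_ptr(void *p)
	{
		return (void *)((uintptr_t)p & ~(0xffULL << 56));
	}

	/* Dereferencing tag_ptr(p, 0x2a) accesses the same memory as p. */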
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This was an experimental feature which has never been
widely used because it relies on specific GCC behaviour.
Also remove the INTC_BASE and TIMER_BASE macros.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
This patch converts the pci_load_of_ranges function to use the new common
of_pci_range_parser.
Signed-off-by: Andrew Murray <amurray@embedded-bits.co.uk>
Signed-off-by: Andrew Murray <Andrew.Murray@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Instead of taking the spinlock, the lockless versions atomically check
that the lock is not taken, and do the reference count update using a
cmpxchg() loop. This is semantically identical to doing the reference
count update protected by the lock, but avoids the "wait for lock"
contention that you get when accesses to the reference count are
contended.
Note that a "lockref" is absolutely _not_ equivalent to an atomic_t.
Even when the lockref reference counts are updated atomically with
cmpxchg, the fact that they also verify the state of the spinlock means
that the lockless updates can never happen while somebody else holds the
spinlock.
So while "lockref_put_or_lock()" looks a lot like just another name for
"atomic_dec_and_lock()", and both optimize to lockless updates, they are
fundamentally different: the decrement done by atomic_dec_and_lock() is
truly independent of any lock (as long as it doesn't decrement to zero),
so a locked region can still see the count change.
The lockref structure, in contrast, really is a *locked* reference
count. If you hold the spinlock, the reference count will be stable and
you can modify the reference count without using atomics, because even
the lockless updates will see and respect the state of the lock.
In order to enable the cmpxchg lockless code, the architecture needs to
do three things:
(1) Make sure that the "arch_spinlock_t" and an "unsigned int" can fit
in an aligned u64, and have a "cmpxchg()" implementation that works
on such a u64 data type.
(2) define a helper function to test for a spinlock being unlocked
("arch_spin_value_unlocked()")
(3) select the "ARCH_USE_CMPXCHG_LOCKREF" config variable in its
Kconfig file.
This enables it for x86-64 (but not 32-bit, we'd need to make sure
cmpxchg() turns into the proper cmpxchg8b in order to enable it for
32-bit mode).
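A simplified sketch of the idea (conceptual only; names and details
approximate the implementation rather than reproduce it):

	struct lockref {
		union {
			aligned_u64 lock_count;	/* lock + count as one word */
			struct {
				spinlock_t lock;
				unsigned int count;
			};
		};
	};

	/* Try to bump the count without taking the lock; the caller falls
	 * back to spin_lock() if the lock is currently held. */
	static inline bool lockref_get_fast(struct lockref *lr)	/* hypothetical */
	{
		struct lockref old, new;

		old.lock_count = READ_ONCE(lr->lock_count);
		while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
			new.lock_count = old.lock_count;
			new.count++;
			if (cmpxchg64(&lr->lock_count, old.lock_count,
				      new.lock_count) == old.lock_count)
				return true;
			old.lock_count = READ_ONCE(lr->lock_count);
		}
		return false;
	}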
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Pull x86 boot fix from Peter Anvin:
"A single very small boot fix for very large memory systems (> 0.5T)"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/mm: Fix boot crash with DEBUG_PAGE_ALLOC=y and more than 512G RAM
Support UART0 low-level debug (DEBUG_LL) on the Hisilicon Hi3620 and Hi3716 SoCs.
Signed-off-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that we support a timer-backed delay loop, I'm quickly getting sick
and tired of people complaining that their beloved bogomips value has
decreased. You know who you are!
This patch removes the bogomips line from /proc/cpuinfo, based on the
reasoning that any program parsing this is already broken and, as such,
won't be further broken if the field is removed.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It appears that gcc may put some code in ".text.unlikely" or
".text.hot" sections. Right now those aren't accounted for in unwind
tables. Add them.
I found some docs about this at:
http://gcc.gnu.org/onlinedocs/gcc-4.6.2/gcc.pdf
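For reference (illustrative only), these sections typically come from the
hot/cold function attributes or from profile-guided function reordering:

	__attribute__((hot))  void fast_path(void)  { }	/* may land in .text.hot */
	__attribute__((cold)) void error_path(void) { }	/* may land in .text.unlikely */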
Without this, if you have slub_debug turned on, you can get messages
that look like this:
unwind: Index not found 7f008c50
Signed-off-by: Doug Anderson <dianders@chromium.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The newly introduced function is to be used as .restart callback for
ARMv7-M machines. The used register is architecturally defined, so it
should work for all M-class machines.
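For illustration (the register layout is architecturally defined; the
macro and function names below are made up rather than taken from the
patch):

	#define SCB_AIRCR		0xe000ed0cUL	/* Application Interrupt/Reset Control */
	#define AIRCR_VECTKEY		(0x05fa << 16)	/* required write key */
	#define AIRCR_SYSRESETREQ	(1 << 2)	/* request a system reset */

	static void v7m_restart_sketch(void)
	{
		writel(AIRCR_VECTKEY | AIRCR_SYSRESETREQ, (void __iomem *)SCB_AIRCR);
	}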
Acked-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Section entries are 2MB on LPAE, so the DEBUG_LL virtual address must
have the same offset in the 2MB section as the physical address. This
fixes async external aborts when DEBUG_LL is enabled on Midway.
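For example, with hypothetical addresses: a UART at physical 0xFFF36000
lies at offset 0x136000 within its 2MB (0x200000) section, so the
DEBUG_LL virtual address must keep that same offset, e.g. 0xFE136000
rather than 0xFE000000.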
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
On Cortex-A15 CPUs up to and including r0p4, in certain rare sequences
of code, the loop buffer may deliver incorrect instructions. This
workaround disables the loop buffer to avoid the erratum.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Compared to the old Atom, Silvermont has offcore response events and more
events that support PEBS.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Reviewed-by: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1374138144-17278-2-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Silvermont (22nm Atom) has two offcore response configuration MSRs;
unlike other Intel CPUs, its event code for MSR_OFFCORE_RSP_1 is 0x02b7.
To avoid complicating intel_fixup_er(), use INTEL_UEVENT_EXTRA_REG to
define MSR_OFFCORE_RSP_X, so that intel_fixup_er() can find the event
code for OFFCORE_RSP_N via x86_pmu.extra_regs[N].event.
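Sketch of the resulting table (names follow the existing offcore
extra_reg tables; the valid-bits mask below is a placeholder, not the
real value):

	static struct extra_reg intel_slm_extra_regs[] __read_mostly = {
		/* OFFCORE_RSP_0 must come first, see intel_fixup_er() */
		INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0xffffull, RSP_0),
		INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0xffffull, RSP_1),
		EVENT_EXTRA_END
	};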
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1374138144-17278-1-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
For performance reasons, when SMAP is in use, SMAP is left open for an
entire put_user_try { ... } put_user_catch(); block. However, calling
__put_user() in the middle of that block will close SMAP, as the
STAC..CLAC constructs intentionally do not nest.
Furthermore, using __put_user() rather than put_user_ex() here is bad
for performance.
Thus, introduce new [compat_]save_altstack_ex() helpers that replace
__[compat_]save_altstack() for x86, which is currently the only
architecture that supports put_user_try { ... } put_user_catch().
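Such a helper might look roughly like this (a sketch, not necessarily
the exact patch):

	#define save_altstack_ex(uss, sp) do { \
		stack_t __user *__uss = uss; \
		struct task_struct *t = current; \
		/* put_user_ex() stays inside the open STAC..CLAC region */ \
		put_user_ex((void __user *)t->sas_ss_sp, &__uss->ss_sp); \
		put_user_ex(sas_ss_flags(sp), &__uss->ss_flags); \
		put_user_ex(t->sas_ss_size, &__uss->ss_size); \
	} while (0)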
Reported-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.8+
Link: http://lkml.kernel.org/n/tip-es5p6y64if71k8p5u08agv9n@git.kernel.org
Add SMAP annotations to csum_partial_copy_to/from_user(). These
functions legitimately access user space and thus need to set the AC
flag.
TODO: add explicit checks that the side with the kernel space pointer
really points into kernel space.
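Conceptually the annotation looks like this (an illustrative sketch with
a hypothetical do_csum_copy() helper, not the actual wrappers):

	#include <asm/smap.h>	/* stac(), clac() */

	static unsigned int checksum_from_user(const void __user *src, void *dst,
					       int len, unsigned int sum, int *err)
	{
		unsigned int ret;

		stac();		/* set EFLAGS.AC: user accesses are intentional */
		ret = do_csum_copy(src, dst, len, sum, err);	/* hypothetical */
		clac();		/* clear EFLAGS.AC again */
		return ret;
	}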
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-2aps0u00eer658fd5xyanan7@git.kernel.org
Cc: <stable@vger.kernel.org> # v3.7+
Merge tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"Two straggling fixes that I had missed as they were posted a couple of
weeks ago, causing problems with interrupts (breaking them completely)
on the CSR SiRF platforms"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
arm: prima2: drop nr_irqs in mach as we moved to linear irqdomain
irqchip: sirf: move from legacy mode to linear irqdomain
The panic strings are hard to read, and on narrow terminals some
characters are simply truncated off the panic message.
Make it slightly prettier with a newline in the Hyp panic strings.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Compilers before GCC 4.6 do not behave well with unnamed fields in structure
initializers and therefore produce build errors:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=10676
By referring to the unnamed union using braces, both older and newer
compilers produce the same result.
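An illustrative example (hypothetical struct, not the actual KVM code):

	struct cp_reg {
		union {
			unsigned long val;
			void (*reset)(struct cp_reg *);
		};			/* unnamed union */
	};

	/* gcc < 4.6 chokes on designating the unnamed member from the
	 * outer level, e.g. "{ .val = 5 }".  Wrapping the unnamed union
	 * in its own braces works with both older and newer compilers: */
	static struct cp_reg r = { { .val = 5 } };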
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reported-by: Russell King <linux@arm.linux.org.uk>
Tested-by: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
The tracepoint for kvm_guest_fault was extremely long; make it
slightly shorter.
Cc: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
The kvm_set_pte function was actually assigning the entire struct to the
structure member, which should work because the structure only has that
one member, but it is still not very nice.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
We always use a timer-backed delay loop for arm64, so don't bother
reporting a bogomips value which appears to confuse some people.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
This tile-specific API had a minor bug, in that if a super huge (>4GB)
page mapped a particular address range, we wouldn't handle it correctly.
As part of fixing that bug, I also cleaned up some of the pud and pmd
accessors to make them more consistent.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Previously, we used a special-purpose register (SPR_SYSTEM_SAVE_K_0)
to hold the CPU number and the top of the current kernel stack
by using the low bits to hold the CPU number, and using the high
bits to hold the address of the page just above where we'd want
the kernel stack to be. That way we could initialize a new SP
when first entering the kernel by just masking the SPR value and
subtracting a couple of words.
However, it's actually more useful to be able to place an arbitrary
kernel-top value in the SPR. This allows us to create a new stack
context (e.g. for virtualization) with an arbitrary top-of-stack VA.
To make this work, we now store the CPU number in the high bits,
above the highest legal VA bit (42 bits in the current tilegx
microarchitecture). The full 42 bits are thus available to store the
top of stack value. Getting the current cpu (a relatively common
operation) is still fast; it's now a shift rather than a mask.
We make this change only for tilegx, since tilepro has too few SPR
bits to do this, and we don't need this support on tilepro anyway.
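Conceptually (the constant and helper names here are illustrative, not
from the patch):

	#define VA_BITS	42	/* highest legal VA bit on current tilegx */

	/* SPR value packs: [63:42] CPU number, [41:0] top-of-stack VA */
	static inline unsigned int spr_to_cpu(unsigned long spr)
	{
		return spr >> VA_BITS;			/* a shift, not a mask */
	}

	static inline unsigned long spr_to_stack_top(unsigned long spr)
	{
		return spr & ((1UL << VA_BITS) - 1);
	}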
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
We use the validate_current() API to make sure that "current" seems
plausible before using it. With the new show_regs_print_info()
API, we want to check that current is OK before calling it, since
otherwise we will end up in a recursive panic.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Normally the build doesn't include these warnings, but at one
point I built with -Wsign-compare, and noticed a few things that
are technically bugs. This change fixes those things.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
With this change such sections are grouped with regular text
in the vmlinux image, placed at the front, which is where the
standard Linux linker script includes .text.hot*.
This change should fix a recently-observed bug where a bunch of
symbols were being omitted from the /proc/kallsyms output
because they fell between _etext and _sinittext.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
In strncpy_from_user_asm, when the destination buffer length was the
same as the actual string length, we were returning the size of the
destination buffer. But since it's a NUL terminated string, we should
return the length of the string instead.
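For example (buf and user_str are hypothetical; this follows the
documented strncpy_from_user() semantics):

	char buf[6];
	long n = strncpy_from_user(buf, user_str, sizeof(buf));
	/* if the user string is "hello" (5 chars plus NUL), n should be 5,
	 * the string length, not sizeof(buf) == 6 */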
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Nothing in the codebase was using them, and as written they took
"unsigned long" as the physical address rather than "phys_addr_t",
which is wrong on tilepro anyway. Rather than fixing stale APIs,
just remove them; if there's ever demand for them on this platform,
we can put them back.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
We had been doing an automatic full eviction of the L1 I$
everywhere whenever we did a kernel-space TLB flush. It turns
out this isn't necessary, since all the callers already handle
doing a flush if necessary.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
With this change, tile Linux now supports address-space layout
randomization for shared objects, stack, heap and vdso.
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Tony Lu <zlu@tilera.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
The r1 value is set based on the r0 value as we return to user space.
So tracing tools won't automatically see the right value. Fix this by
generating the correct r1 value in do_syscall_trace_exit() rather
than trying to tamper with the hot path in syscall return.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
The "available_irqs" value needs to actually reflect the IRQs
available, not just start as an all-ones mask, since we only
have 32 IRQs available even on a 64-bit platform.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>