- The series "zram: optimal post-processing target selection" from

Sergey Senozhatsky improves zram's post-processing selection algorithm.
   This leads to improved memory savings.
 
 - Wei Yang has gone to town on the mapletree code, contributing several
   series which clean up the implementation:
 
 	- "refine mas_mab_cp()"
 	- "Reduce the space to be cleared for maple_big_node"
 	- "maple_tree: simplify mas_push_node()"
 	- "Following cleanup after introduce mas_wr_store_type()"
 	- "refine storing null"
 
 - The series "selftests/mm: hugetlb_fault_after_madv improvements" from
   David Hildenbrand fixes this selftest for s390.
 
 - The series "introduce pte_offset_map_{ro|rw}_nolock()" from Qi Zheng
   implements some rationaizations and cleanups in the page mapping code.
 
 - The series "mm: optimize shadow entries removal" from Shakeel Butt
   optimizes the file truncation code by speeding up the handling of shadow
   entries.
 
 - The series "Remove PageKsm()" from Matthew Wilcox completes the
   migration of this flag over to being a folio-based flag.
 
 - The series "Unify hugetlb into arch_get_unmapped_area functions" from
   Oscar Salvador implements a bunch of consolidations and cleanups in the
   hugetlb code.
 
 - The series "Do not shatter hugezeropage on wp-fault" from Dev Jain
   takes away the wp-fault time practice of turning a huge zero page into
   small pages.  Instead we replace the whole thing with a THP.  More
   consistent cleaner and potentiall saves a large number of pagefaults.
 
 - The series "percpu: Add a test case and fix for clang" from Andy
   Shevchenko enhances and fixes the kernel's built in percpu test code.
 
 - The series "mm/mremap: Remove extra vma tree walk" from Liam Howlett
   optimizes mremap() by avoiding doing things which we didn't need to do.
 
 - The series "Improve the tmpfs large folio read performance" from
   Baolin Wang teaches tmpfs to copy data into userspace at the folio size
   rather than as individual pages.  A 20% speedup was observed.
 
 - The series "mm/damon/vaddr: Fix issue in
   damon_va_evenly_split_region()" fro Zheng Yejian fixes DAMON splitting.
 
 - The series "memcg-v1: fully deprecate charge moving" from Shakeel Butt
   removes the long-deprecated memcgv2 charge moving feature.
 
 - The series "fix error handling in mmap_region() and refactor" from
   Lorenzo Stoakes cleanup up some of the mmap() error handling and
   addresses some potential performance issues.
 
 - The series "x86/module: use large ROX pages for text allocations" from
   Mike Rapoport teaches x86 to use large pages for read-only-execute
   module text.
 
 - The series "page allocation tag compression" from Suren Baghdasaryan
   is followon maintenance work for the new page allocation profiling
   feature.
 
 - The series "page->index removals in mm" from Matthew Wilcox remove
   most references to page->index in mm/.  A slow march towards shrinking
   struct page.
 
 - The series "damon/{self,kunit}tests: minor fixups for DAMON debugfs
   interface tests" from Andrew Paniakin performs maintenance work for
   DAMON's self testing code.
 
 - The series "mm: zswap swap-out of large folios" from Kanchana Sridhar
   improves zswap's batching of compression and decompression.  It is a
   step along the way towards using Intel IAA hardware acceleration for
   this zswap operation.
 
 - The series "kasan: migrate the last module test to kunit" from
   Sabyrzhan Tasbolatov completes the migration of the KASAN built-in tests
   over to the KUnit framework.
 
 - The series "implement lightweight guard pages" from Lorenzo Stoakes
   permits userapace to place fault-generating guard pages within a single
   VMA, rather than requiring that multiple VMAs be created for this.
   Improved efficiencies for userspace memory allocators are expected.
 
 - The series "memcg: tracepoint for flushing stats" from JP Kobryn uses
   tracepoints to provide increased visibility into memcg stats flushing
   activity.
 
 - The series "zram: IDLE flag handling fixes" from Sergey Senozhatsky
   fixes a zram buglet which potentially affected performance.
 
 - The series "mm: add more kernel parameters to control mTHP" from
   Maíra Canal enhances our ability to control/configuremultisize THP from
   the kernel boot command line.
 
 - The series "kasan: few improvements on kunit tests" from Sabyrzhan
   Tasbolatov has a couple of fixups for the KASAN KUnit tests.
 
 - The series "mm/list_lru: Split list_lru lock into per-cgroup scope"
   from Kairui Song optimizes list_lru memory utilization when lockdep is
   enabled.
 -----BEGIN PGP SIGNATURE-----
 
 iHUEABYIAB0WIQTTMBEPP41GrTpTJgfdBJ7gKXxAjgUCZzwFqgAKCRDdBJ7gKXxA
 jkeuAQCkl+BmeYHE6uG0hi3pRxkupseR6DEOAYIiTv0/l8/GggD/Z3jmEeqnZaNq
 xyyenpibWgUoShU2wZ/Ha8FE5WDINwg=
 =JfWR
 -----END PGP SIGNATURE-----

Merge tag 'mm-stable-2024-11-18-19-27' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

 - The series "zram: optimal post-processing target selection" from
   Sergey Senozhatsky improves zram's post-processing selection
   algorithm. This leads to improved memory savings.

 - Wei Yang has gone to town on the mapletree code, contributing several
   series which clean up the implementation:
	- "refine mas_mab_cp()"
	- "Reduce the space to be cleared for maple_big_node"
	- "maple_tree: simplify mas_push_node()"
	- "Following cleanup after introduce mas_wr_store_type()"
	- "refine storing null"

 - The series "selftests/mm: hugetlb_fault_after_madv improvements" from
   David Hildenbrand fixes this selftest for s390.

 - The series "introduce pte_offset_map_{ro|rw}_nolock()" from Qi Zheng
   implements some rationalizations and cleanups in the page mapping
   code.
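
   To illustrate the new API, here is a minimal sketch of the rw
   variant's revalidation pattern, modeled on the arm/mm/fault-armv.c
   conversion in this pull (variable names follow that diff; pmd,
   address and vma are assumed to be in scope):

	pte_t *pte;
	spinlock_t *ptl;
	pmd_t pmdval;

   again:
	pte = pte_offset_map_rw_nolock(vma->vm_mm, pmd, address,
				       &pmdval, &ptl);
	if (!pte)
		return 0;
	spin_lock(ptl);
	/* The lock was not taken at map time; the PTE table may have
	 * been freed or replaced meanwhile, so revalidate the pmd
	 * before writing through the mapping. */
	if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
		pte_unmap_unlock(pte, ptl);
		goto again;
	}
	/* ... modify PTEs under ptl ... */
	pte_unmap_unlock(pte, ptl);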

 - The series "mm: optimize shadow entries removal" from Shakeel Butt
   optimizes the file truncation code by speeding up the handling of
   shadow entries.

 - The series "Remove PageKsm()" from Matthew Wilcox completes the
   migration of this flag over to being a folio-based flag.

 - The series "Unify hugetlb into arch_get_unmapped_area functions" from
   Oscar Salvador implements a bunch of consolidations and cleanups in
   the hugetlb code.

 - The series "Do not shatter hugezeropage on wp-fault" from Dev Jain
   takes away the wp-fault time practice of turning a huge zero page
   into small pages. Instead we replace the whole thing with a THP.
   This is more consistent and cleaner, and it potentially saves a
   large number of page faults.

 - The series "percpu: Add a test case and fix for clang" from Andy
   Shevchenko enhances and fixes the kernel's built-in percpu test code.

 - The series "mm/mremap: Remove extra vma tree walk" from Liam Howlett
   optimizes mremap() by removing an unnecessary extra walk of the VMA
   tree.

 - The series "Improve the tmpfs large folio read performance" from
   Baolin Wang teaches tmpfs to copy data into userspace at the folio
   size rather than as individual pages. A 20% speedup was observed.

 - The series "mm/damon/vaddr: Fix issue in
   damon_va_evenly_split_region()" fro Zheng Yejian fixes DAMON
   splitting.

 - The series "memcg-v1: fully deprecate charge moving" from Shakeel
   Butt removes the long-deprecated memcg v1 charge moving feature.
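
   Per the updated cgroup-v1 documentation in this pull, the old knob
   is now a stub: reads always return 0 and writes always fail with
   -EINVAL (the cgroup path below is illustrative):

	cat /sys/fs/cgroup/memory/<group>/memory.move_charge_at_immigrate     # always 0
	echo 1 > /sys/fs/cgroup/memory/<group>/memory.move_charge_at_immigrate  # -EINVAL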

 - The series "fix error handling in mmap_region() and refactor" from
   Lorenzo Stoakes cleans up some of the mmap() error handling and
   addresses some potential performance issues.

 - The series "x86/module: use large ROX pages for text allocations"
   from Mike Rapoport teaches x86 to use large pages for
   read-only-execute module text.

 - The series "page allocation tag compression" from Suren Baghdasaryan
   is follow-on maintenance work for the new page allocation profiling
   feature.
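
   Per the updated mem_profiling documentation in this pull, the new
   compressed page tag format can be requested via an optional flag on
   the boot parameter (illustrative; compression may fail on some
   system configurations, in which case a warning is issued and
   profiling is disabled):

	sysctl.vm.mem_profiling=1,compressed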

 - The series "page->index removals in mm" from Matthew Wilcox remove
   most references to page->index in mm/. A slow march towards shrinking
   struct page.

 - The series "damon/{self,kunit}tests: minor fixups for DAMON debugfs
   interface tests" from Andrew Paniakin performs maintenance work for
   DAMON's self-testing code.

 - The series "mm: zswap swap-out of large folios" from Kanchana Sridhar
   improves zswap's batching of compression and decompression. It is a
   step along the way towards using Intel IAA hardware acceleration for
   this zswap operation.

 - The series "kasan: migrate the last module test to kunit" from
   Sabyrzhan Tasbolatov completes the migration of the KASAN built-in
   tests over to the KUnit framework.
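
   With the migration complete, the whole suite runs under KUnit; per
   the refreshed documentation, with CONFIG_KUNIT enabled and
   CONFIG_KASAN_KUNIT_TEST=m the tests can be run by loading the
   module (results appear in the kernel log):

	modprobe kasan_test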

 - The series "implement lightweight guard pages" from Lorenzo Stoakes
   permits userspace to place fault-generating guard pages within a
   single VMA, rather than requiring that multiple VMAs be created for
   this. Improved efficiencies for userspace memory allocators are
   expected.
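
   A minimal userspace sketch using the MADV_* values this series adds
   to the uapi headers (the fallback defines mirror the diff below;
   the helper name is illustrative, not part of the series):

	#include <sys/mman.h>

	#ifndef MADV_GUARD_INSTALL
	#define MADV_GUARD_INSTALL 102	/* fatal signal on access to range */
	#define MADV_GUARD_REMOVE  103	/* unguard range */
	#endif

	/* Turn one page of an existing anonymous mapping into a guard
	 * page: any access raises a fatal signal until the guard is
	 * removed again with MADV_GUARD_REMOVE. */
	static int install_guard_page(void *addr, size_t pagesz)
	{
		return madvise(addr, pagesz, MADV_GUARD_INSTALL);
	}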

 - The series "memcg: tracepoint for flushing stats" from JP Kobryn uses
   tracepoints to provide increased visibility into memcg stats flushing
   activity.

 - The series "zram: IDLE flag handling fixes" from Sergey Senozhatsky
   fixes a zram buglet which potentially affected performance.

 - The series "mm: add more kernel parameters to control mTHP" from
   Maíra Canal enhances our ability to control/configure multisize THP
   from the kernel boot command line.
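
   An illustrative boot command line using the formats documented in
   this pull (the sizes shown are examples and architecture-dependent;
   any shmem THP size not listed in thp_shmem= is implicitly set to
   "never"):

	transparent_hugepage_shmem=within_size thp_shmem=64K,128K:advise;2M:always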

 - The series "kasan: few improvements on kunit tests" from Sabyrzhan
   Tasbolatov has a couple of fixups for the KASAN KUnit tests.

 - The series "mm/list_lru: Split list_lru lock into per-cgroup scope"
   from Kairui Song optimizes list_lru memory utilization when lockdep
   is enabled.

* tag 'mm-stable-2024-11-18-19-27' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (215 commits)
  cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
  mm/kfence: add a new kunit test test_use_after_free_read_nofault()
  zram: fix NULL pointer in comp_algorithm_show()
  memcg/hugetlb: add hugeTLB counters to memcg
  vmstat: call fold_vm_zone_numa_events() before show per zone NUMA event
  mm: mmap_lock: check trace_mmap_lock_$type_enabled() instead of regcount
  zram: ZRAM_DEF_COMP should depend on ZRAM
  MAINTAINERS/MEMORY MANAGEMENT: add document files for mm
  Docs/mm/damon: recommend academic papers to read and/or cite
  mm: define general function pXd_init()
  kmemleak: iommu/iova: fix transient kmemleak false positive
  mm/list_lru: simplify the list_lru walk callback function
  mm/list_lru: split the lock to per-cgroup scope
  mm/list_lru: simplify reparenting and initial allocation
  mm/list_lru: code clean up for reparenting
  mm/list_lru: don't export list_lru_add
  mm/list_lru: don't pass unnecessary key parameters
  kasan: add kunit tests for kmalloc_track_caller, kmalloc_node_track_caller
  kasan: change kasan_atomics kunit test as KUNIT_CASE_SLOW
  kasan: use EXPORT_SYMBOL_IF_KUNIT to export symbols
  ...
Linus Torvalds 2024-11-23 09:58:07 -08:00
commit 5c00ff742b
323 changed files with 7363 additions and 4404 deletions

@@ -47,6 +47,8 @@ The list of possible return codes:
-ENOMEM zram was not able to allocate enough memory to fulfil your
needs.
-EINVAL invalid input has been provided.
-EAGAIN re-try operation later (e.g. when attempting to run recompress
and writeback simultaneously).
======== =============================================================
If you use 'echo', the returned value is set by the 'echo' utility,

@@ -90,9 +90,7 @@ Brief summary of control files.
used.
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
This knob is deprecated and shouldn't be
used.
memory.move_charge_at_immigrate This knob is deprecated.
memory.oom_control set/show oom controls.
This knob is deprecated and shouldn't be
used.
@@ -243,10 +241,6 @@ behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).
But see :ref:`section 8.2 <cgroup-v1-memory-movable-charges>` when moving a
task to another cgroup, its pages may be recharged to the new cgroup, if
move_charge_at_immigrate has been chosen.
2.4 Swap Extension
--------------------------------------
@@ -756,78 +750,8 @@ If we want to change this to 1G, we can at any time use::
THIS IS DEPRECATED!
It's expensive and unreliable! It's better practice to launch workload
tasks directly from inside their target cgroup. Use dedicated workload
cgroups to allow fine-grained policy adjustments without having to
move physical pages between control domains.
Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
This feature is not supported in !CONFIG_MMU environments because of lack of
page tables.
8.1 Interface
-------------
This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.
If you want to enable it::
# echo (some positive value) > memory.move_charge_at_immigrate
.. note::
Each bits of move_charge_at_immigrate has its own meaning about what type
of charges should be moved. See :ref:`section 8.2
<cgroup-v1-memory-movable-charges>` for details.
.. note::
Charges are moved only when you move mm->owner, in other words,
a leader of a thread group.
.. note::
If we cannot find enough space for the task in the destination cgroup, we
try to make space by reclaiming memory. Task migration may fail if we
cannot make enough space.
.. note::
It can take several seconds if you move charges much.
And if you want disable it again::
# echo 0 > memory.move_charge_at_immigrate
.. _cgroup-v1-memory-movable-charges:
8.2 Type of charges which can be moved
--------------------------------------
Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved. But in any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup.
+---+--------------------------------------------------------------------------+
|bit| what type of charges would be moved ? |
+===+==========================================================================+
| 0 | A charge of an anonymous page (or swap of it) used by the target task. |
| | You must enable Swap Extension (see 2.4) to enable move of swap charges. |
+---+--------------------------------------------------------------------------+
| 1 | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory) |
| | and swaps of tmpfs file) mmapped by the target task. Unlike the case of |
| | anonymous pages, file pages (and swaps) in the range mmapped by the task |
| | will be moved even if the task hasn't done page fault, i.e. they might |
| | not be the task's "RSS", but other task's "RSS" that maps the same file. |
| | The mapcount of the page is ignored (the page can be moved independent |
| | of the mapcount). You must enable Swap Extension (see 2.4) to |
| | enable move of swap charges. |
+---+--------------------------------------------------------------------------+
8.3 TODO
--------
- All of moving charge operations are done under cgroup_mutex. It's not good
behavior to hold the mutex too long, so we may need some trick.
Reading memory.move_charge_at_immigrate will always return 0 and writing
to it will always return -EINVAL.
9. Memory thresholds
====================

@@ -1655,6 +1655,11 @@ The following nested keys are defined.
pgdemote_khugepaged
Number of pages demoted by khugepaged.
hugetlb
Amount of memory used by hugetlb pages. This metric only shows
up if hugetlb usage is accounted for in memory.current (i.e.
cgroup is mounted with the memory_hugetlb_accounting option).
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.

@@ -6711,6 +6711,16 @@
Force threading of all interrupt handlers except those
marked explicitly IRQF_NO_THREAD.
thp_shmem= [KNL]
Format: <size>[KMG],<size>[KMG]:<policy>;<size>[KMG]-<size>[KMG]:<policy>
Control the default policy of each hugepage size for the
internal shmem mount. <policy> is one of policies available
for the shmem mount ("always", "inherit", "never", "within_size",
and "advise").
It can be used multiple times for multiple shmem THP sizes.
See Documentation/admin-guide/mm/transhuge.rst for more
details.
topology= [S390,EARLY]
Format: {off | on}
Specify if the kernel should make use of the cpu
@@ -6952,6 +6962,13 @@
See Documentation/admin-guide/mm/transhuge.rst
for more details.
transparent_hugepage_shmem= [KNL]
Format: [always|within_size|advise|never|deny|force]
Can be used to control the hugepage allocation policy for
the internal shmem mount.
See Documentation/admin-guide/mm/transhuge.rst
for more details.
trusted.source= [KEYS]
Format: <string>
This parameter identifies the trust source as a backend

@@ -326,6 +326,29 @@ PMD_ORDER THP policy will be overridden. If the policy for PMD_ORDER
is not defined within a valid ``thp_anon``, its policy will default to
``never``.
Similarly to ``transparent_hugepage``, you can control the hugepage
allocation policy for the internal shmem mount by using the kernel parameter
``transparent_hugepage_shmem=<policy>``, where ``<policy>`` is one of the
seven valid policies for shmem (``always``, ``within_size``, ``advise``,
``never``, ``deny``, and ``force``).
In the same manner as ``thp_anon`` controls each supported anonymous THP
size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
has the same format as ``thp_anon``, but also supports the policy
``within_size``.
``thp_shmem=`` may be specified multiple times to configure all THP sizes
as required. If ``thp_shmem=`` is specified at least once, any shmem THP
sizes not explicitly configured on the command line are implicitly set to
``never``.
``transparent_hugepage_shmem`` setting only affects the global toggle. If
``thp_shmem`` is not specified, PMD_ORDER hugepage will default to
``inherit``. However, if a valid ``thp_shmem`` setting is provided by the
user, the PMD_ORDER hugepage policy will be overridden. If the policy for
PMD_ORDER is not defined within a valid ``thp_shmem``, its policy will
default to ``never``.
Hugepages in tmpfs/shmem
========================
@@ -530,10 +553,18 @@ anon_fault_fallback_charge
instead falls back to using huge pages with lower orders or
small pages even though the allocation was successful.
swpout
is incremented every time a huge page is swapped out in one
zswpout
is incremented every time a huge page is swapped out to zswap in one
piece without splitting.
swpin
is incremented every time a huge page is swapped in from a non-zswap
swap device in one piece.
swpout
is incremented every time a huge page is swapped out to a non-zswap
swap device in one piece without splitting.
swpout_fallback
is incremented if a huge page has to be split before swapout.
Usually because failed to allocate some continuous swap space

@@ -511,19 +511,14 @@ Tests
~~~~~
There are KASAN tests that allow verifying that KASAN works and can detect
certain types of memory corruptions. The tests consist of two parts:
certain types of memory corruptions.
1. Tests that are integrated with the KUnit Test Framework. Enabled with
``CONFIG_KASAN_KUNIT_TEST``. These tests can be run and partially verified
All KASAN tests are integrated with the KUnit Test Framework and can be enabled
via ``CONFIG_KASAN_KUNIT_TEST``. The tests can be run and partially verified
automatically in a few different ways; see the instructions below.
2. Tests that are currently incompatible with KUnit. Enabled with
``CONFIG_KASAN_MODULE_TEST`` and can only be run as a module. These tests can
only be verified manually by loading the kernel module and inspecting the
kernel log for KASAN reports.
Each KUnit-compatible KASAN test prints one of multiple KASAN reports if an
error is detected. Then the test prints its number and status.
Each KASAN test prints one of multiple KASAN reports if an error is detected.
Then the test prints its number and status.
When a test passes::
@@ -550,16 +545,16 @@ Or, if one of the tests failed::
not ok 1 - kasan
There are a few ways to run KUnit-compatible KASAN tests.
There are a few ways to run the KASAN tests.
1. Loadable module
With ``CONFIG_KUNIT`` enabled, KASAN-KUnit tests can be built as a loadable
module and run by loading ``kasan_test.ko`` with ``insmod`` or ``modprobe``.
With ``CONFIG_KUNIT`` enabled, the tests can be built as a loadable module
and run by loading ``kasan_test.ko`` with ``insmod`` or ``modprobe``.
2. Built-In
With ``CONFIG_KUNIT`` built-in, KASAN-KUnit tests can be built-in as well.
With ``CONFIG_KUNIT`` built-in, the tests can be built-in as well.
In this case, the tests will run at boot as a late-init call.
3. Using kunit_tool

@@ -161,6 +161,7 @@ See the include/linux/kmemleak.h header for the functions prototype.
- ``kmemleak_free_percpu`` - notify of a percpu memory block freeing
- ``kmemleak_update_trace`` - update object allocation stack trace
- ``kmemleak_not_leak`` - mark an object as not a leak
- ``kmemleak_transient_leak`` - mark an object as a transient leak
- ``kmemleak_ignore`` - do not scan or report an object as leak
- ``kmemleak_scan_area`` - add scan areas inside a memory block
- ``kmemleak_no_scan`` - do not scan a memory block

@@ -18,12 +18,17 @@ kconfig options:
missing annotation
Boot parameter:
sysctl.vm.mem_profiling=0|1|never
sysctl.vm.mem_profiling={0|1|never}[,compressed]
When set to "never", memory allocation profiling overhead is minimized and it
cannot be enabled at runtime (sysctl becomes read-only).
When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=y, default value is "1".
When CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT=n, default value is "never".
"compressed" optional parameter will try to store page tag references in a
compact format, avoiding page extensions. This results in improved performance
and memory consumption, however it might fail depending on system configuration.
If compression fails, a warning is issued and memory allocation profiling gets
disabled.
sysctl:
/proc/sys/vm/mem_profiling

@@ -37,3 +37,9 @@ with no code but simple configurations.
To utilize and control DAMON from the user-space, please refer to the
administration :doc:`guide </admin-guide/mm/damon/index>`.
If you prefer academic papers for reading and citations, please use the papers
from `HPDC'22 <https://dl.acm.org/doi/abs/10.1145/3502181.3531466>`_ and
`Middleware19 Industry <https://dl.acm.org/doi/abs/10.1145/3366626.3368125>`_ .
Note that those cover DAMON implementations in Linux v5.16 and v5.15,
respectively.

@@ -16,9 +16,13 @@ There are helpers to lock/unlock a table and other accessor functions:
- pte_offset_map_lock()
maps PTE and takes PTE table lock, returns pointer to PTE with
pointer to its PTE table lock, or returns NULL if no PTE table;
- pte_offset_map_nolock()
- pte_offset_map_ro_nolock()
maps PTE, returns pointer to PTE with pointer to its PTE table
lock (not taken), or returns NULL if no PTE table;
- pte_offset_map_rw_nolock()
maps PTE, returns pointer to PTE with pointer to its PTE table
lock (not taken) and the value of its pmd entry, or returns NULL
if no PTE table;
- pte_offset_map()
maps PTE, returns pointer to PTE, or returns NULL if no PTE table;
- pte_unmap()

@@ -422,16 +422,12 @@ KASAN连接到vmap基础架构以懒清理未使用的影子内存。
~~~~
有一些KASAN测试可以验证KASAN是否正常工作并可以检测某些类型的内存损坏。
测试由两部分组成:
1. 与KUnit测试框架集成的测试。使用 ``CONFIG_KASAN_KUNIT_TEST`` 启用。
这些测试可以通过几种不同的方式自动运行和部分验证;请参阅下面的说明。
所有 KASAN 测试都与 KUnit 测试框架集成,可通过 ``CONFIG_KASAN_KUNIT_TEST`` 启用。
测试可以通过几种不同的方式自动运行和部分验证;请参阅下说明。
2. 与KUnit不兼容的测试。使用 ``CONFIG_KASAN_MODULE_TEST`` 启用并且只能作为模块
运行。这些测试只能通过加载内核模块并检查内核日志以获取KASAN报告来手动验证。
如果检测到错误每个KUnit兼容的KASAN测试都会打印多个KASAN报告之一然后测试打印
其编号和状态。
如果检测到错误,每个 KASAN 测试都会打印多份 KASAN 报告中的一份。
然后测试会打印其编号和状态。
当测试通过::
@@ -458,16 +454,16 @@ KASAN连接到vmap基础架构以懒清理未使用的影子内存。
not ok 1 - kasan
有几种方法可以运行与KUnit兼容的KASAN测试。
有几种方法可以运行 KASAN 测试。
1. 可加载模块
启用 ``CONFIG_KUNIT`` 后,KASAN-KUnit测试可以构建为可加载模块并通过使用
``insmod````modprobe`` 加载 ``kasan_test.ko`` 来运行。
启用 ``CONFIG_KUNIT`` 后,可以将测试构建为可加载模块
并通过使用 ``insmod````modprobe`` 加载 ``kasan_test.ko`` 来运行。
2. 内置
通过内置 ``CONFIG_KUNIT`` 也可以内置KASAN-KUnit测试。在这种情况下
通过内置 ``CONFIG_KUNIT``,测试也可以内置。
测试将在启动时作为后期初始化调用运行。
3. 使用kunit_tool

@@ -404,16 +404,13 @@ KASAN連接到vmap基礎架構以懶清理未使用的影子內存。
~~~~
有一些KASAN測試可以驗證KASAN是否正常工作並可以檢測某些類型的內存損壞。
測試由兩部分組成:
1. 與KUnit測試框架集成的測試。使用 ``CONFIG_KASAN_KUNIT_TEST`` 啓用。
這些測試可以通過幾種不同的方式自動運行和部分驗證;請參閱下面的說明。
所有 KASAN 測試均與 KUnit 測試框架集成,並且可以啟用
透過 ``CONFIG_KASAN_KUNIT_TEST``。可以運行測試並進行部分驗證
以幾種不同的方式自動進行;請參閱下面的說明。
2. 與KUnit不兼容的測試。使用 ``CONFIG_KASAN_MODULE_TEST`` 啓用並且只能作爲模塊
運行。這些測試只能通過加載內核模塊並檢查內核日誌以獲取KASAN報告來手動驗證。
如果檢測到錯誤每個KUnit兼容的KASAN測試都會打印多個KASAN報告之一然後測試打印
其編號和狀態。
如果偵測到錯誤,每個 KASAN 測試都會列印多個 KASAN 報告之一。
然後測試列印其編號和狀態。
當測試通過::
@@ -440,16 +437,16 @@ KASAN連接到vmap基礎架構以懶清理未使用的影子內存。
not ok 1 - kasan
有幾種方法可以運行與KUnit兼容的KASAN測試。
有幾種方法可以執行 KASAN 測試。
1. 可加載模塊
啓用 ``CONFIG_KUNIT``KASAN-KUnit測試可以構建爲可加載模塊並通過使用
``insmod````modprobe`` 加載 ``kasan_test.ko`` 來運行
啟用 ``CONFIG_KUNIT`` 後,測試可以建置為可載入模組
並且透過使用 ``insmod````modprobe`` 來載入 ``kasan_test.ko`` 來運作
2. 內置
通過內置 ``CONFIG_KUNIT`` 也可以內置KASAN-KUnit測試。在這種情況下
透過內建 ``CONFIG_KUNIT``,測試也可以內建。
測試將在啓動時作爲後期初始化調用運行。
3. 使用kunit_tool

@@ -14957,6 +14957,8 @@ S: Maintained
W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
T: quilt git://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new
F: Documentation/admin-guide/mm/
F: Documentation/mm/
F: include/linux/gfp.h
F: include/linux/gfp_types.h
F: include/linux/memfd.h

@@ -1025,6 +1025,14 @@ config ARCH_WANTS_EXECMEM_LATE
enough entropy for module space randomization, for instance
arm64.
config ARCH_HAS_EXECMEM_ROX
bool
depends on MMU && !HIGHMEM
help
For architectures that support allocations of executable memory
with read-only execute permissions. Architecture must implement
execmem_fill_trapping_insns() callback to enable this.
config HAVE_IRQ_EXIT_ON_IRQ_STACK
bool
help

@@ -5,3 +5,4 @@ generic-y += agp.h
generic-y += asm-offsets.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += text-patching.h

@@ -14,7 +14,7 @@ extern void clear_page(void *page);
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
extern void copy_page(void * _to, void * _from);
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)

@@ -78,6 +78,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
#define MADV_GUARD_INSTALL 102 /* fatal signal on access to range */
#define MADV_GUARD_REMOVE 103 /* unguard range */
/* compatibility flags */
#define MAP_FILE 0

@@ -6,3 +6,4 @@ generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += parport.h
generic-y += user.h
generic-y += text-patching.h

@@ -23,7 +23,7 @@
#include <asm/insn.h>
#include <asm/set_memory.h>
#include <asm/stacktrace.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
/*
* The compiler emitted profiling hook consists of

@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>
#include <linux/jump_label.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
#include <asm/insn.h>
static void __arch_jump_label_transform(struct jump_entry *entry,

@@ -15,7 +15,7 @@
#include <linux/kgdb.h>
#include <linux/uaccess.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
#include <asm/traps.h>
struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] =

@@ -9,7 +9,7 @@
#include <asm/fixmap.h>
#include <asm/smp_plat.h>
#include <asm/opcodes.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
struct patch {
void *addr;

@@ -61,32 +61,8 @@ static int do_adjust_pte(struct vm_area_struct *vma, unsigned long address,
return ret;
}
#if defined(CONFIG_SPLIT_PTE_PTLOCKS)
/*
* If we are using split PTE locks, then we need to take the page
* lock here. Otherwise we are using shared mm->page_table_lock
* which is already locked, thus cannot take it.
*/
static inline void do_pte_lock(spinlock_t *ptl)
{
/*
* Use nested version here to indicate that we are already
* holding one similar spinlock.
*/
spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
}
static inline void do_pte_unlock(spinlock_t *ptl)
{
spin_unlock(ptl);
}
#else /* !defined(CONFIG_SPLIT_PTE_PTLOCKS) */
static inline void do_pte_lock(spinlock_t *ptl) {}
static inline void do_pte_unlock(spinlock_t *ptl) {}
#endif /* defined(CONFIG_SPLIT_PTE_PTLOCKS) */
static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
unsigned long pfn)
unsigned long pfn, struct vm_fault *vmf)
{
spinlock_t *ptl;
pgd_t *pgd;
@@ -94,6 +70,7 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
pud_t *pud;
pmd_t *pmd;
pte_t *pte;
pmd_t pmdval;
int ret;
pgd = pgd_offset(vma->vm_mm, address);
@@ -112,20 +89,33 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
if (pmd_none_or_clear_bad(pmd))
return 0;
again:
/*
* This is called while another page table is mapped, so we
* must use the nested version. This also means we need to
* open-code the spin-locking.
*/
pte = pte_offset_map_nolock(vma->vm_mm, pmd, address, &ptl);
pte = pte_offset_map_rw_nolock(vma->vm_mm, pmd, address, &pmdval, &ptl);
if (!pte)
return 0;
do_pte_lock(ptl);
/*
* If we are using split PTE locks, then we need to take the page
* lock here. Otherwise we are using shared mm->page_table_lock
* which is already locked, thus cannot take it.
*/
if (ptl != vmf->ptl) {
spin_lock_nested(ptl, SINGLE_DEPTH_NESTING);
if (unlikely(!pmd_same(pmdval, pmdp_get_lockless(pmd)))) {
pte_unmap_unlock(pte, ptl);
goto again;
}
}
ret = do_adjust_pte(vma, address, pfn, pte);
do_pte_unlock(ptl);
if (ptl != vmf->ptl)
spin_unlock(ptl);
pte_unmap(pte);
return ret;
@@ -133,7 +123,8 @@ static int adjust_pte(struct vm_area_struct *vma, unsigned long address,
static void
make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep, unsigned long pfn)
unsigned long addr, pte_t *ptep, unsigned long pfn,
struct vm_fault *vmf)
{
struct mm_struct *mm = vma->vm_mm;
struct vm_area_struct *mpnt;
@@ -160,7 +151,7 @@ make_coherent(struct address_space *mapping, struct vm_area_struct *vma,
if (!(mpnt->vm_flags & VM_MAYSHARE))
continue;
offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn);
aliases += adjust_pte(mpnt, mpnt->vm_start + offset, pfn, vmf);
}
flush_dcache_mmap_unlock(mapping);
if (aliases)
@@ -203,7 +194,7 @@ void update_mmu_cache_range(struct vm_fault *vmf, struct vm_area_struct *vma,
__flush_dcache_folio(mapping, folio);
if (mapping) {
if (cache_is_vivt())
make_coherent(mapping, vma, addr, ptep, pfn);
make_coherent(mapping, vma, addr, ptep, pfn, vmf);
else if (vma->vm_flags & VM_EXEC)
__flush_icache_all();
}

@@ -25,7 +25,7 @@
#include <asm/cacheflush.h>
#include <linux/percpu.h>
#include <linux/bug.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
#include <asm/sections.h>
#include "../decode-arm.h"

@@ -14,7 +14,7 @@
/* for arm_gen_branch */
#include <asm/insn.h>
/* for patch_text */
#include <asm/patch.h>
#include <asm/text-patching.h>
#include "core.h"

@@ -110,7 +110,7 @@
#define PAGE_END (_PAGE_END(VA_BITS_MIN))
#endif /* CONFIG_KASAN */
#define PHYSMEM_END __pa(PAGE_END - 1)
#define DIRECT_MAP_PHYSMEM_END __pa(PAGE_END - 1)
#define MIN_THREAD_SHIFT (14 + KASAN_THREAD_SHIFT)

@@ -13,6 +13,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable);
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_default_noflush(struct page *page);
int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
bool kernel_page_present(struct page *page);
int set_memory_encrypted(unsigned long addr, int numpages);

@@ -15,7 +15,7 @@
#include <asm/debug-monitors.h>
#include <asm/ftrace.h>
#include <asm/insn.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
struct fregs_offset {

@@ -9,7 +9,7 @@
#include <linux/jump_label.h>
#include <linux/smp.h>
#include <asm/insn.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
bool arch_jump_label_transform_queue(struct jump_entry *entry,
enum jump_label_type type)

@@ -17,7 +17,7 @@
#include <asm/debug-monitors.h>
#include <asm/insn.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#include <asm/traps.h>
struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] = {

@@ -10,7 +10,7 @@
#include <asm/fixmap.h>
#include <asm/insn.h>
#include <asm/kprobes.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#include <asm/sections.h>
static DEFINE_RAW_SPINLOCK(patch_lock);

@@ -27,7 +27,7 @@
#include <asm/debug-monitors.h>
#include <asm/insn.h>
#include <asm/irq.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#include <asm/ptrace.h>
#include <asm/sections.h>
#include <asm/system_misc.h>

@@ -41,7 +41,7 @@
#include <asm/extable.h>
#include <asm/insn.h>
#include <asm/kprobes.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#include <asm/traps.h>
#include <asm/smp.h>
#include <asm/stack_pointer.h>

@@ -1023,7 +1023,7 @@ struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
if (vma->vm_flags & VM_MTE)
flags |= __GFP_ZEROTAGS;
return vma_alloc_folio(flags, 0, vma, vaddr, false);
return vma_alloc_folio(flags, 0, vma, vaddr);
}
void tag_clear_highpage(struct page *page)

@@ -282,7 +282,23 @@ int realm_register_memory_enc_ops(void)
return arm64_mem_crypt_ops_register(&realm_crypt_ops);
}
int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
{
unsigned long addr = (unsigned long)page_address(page);
if (!can_set_direct_map())
return 0;
return set_memory_valid(addr, nr, valid);
}
#ifdef CONFIG_DEBUG_PAGEALLOC
/*
* This is - apart from the return value - doing the same
* thing as the new set_direct_map_valid_noflush() function.
*
* Unify? Explain the conceptual differences?
*/
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
if (!can_set_direct_map())

@@ -19,7 +19,7 @@
#include <asm/cacheflush.h>
#include <asm/debug-monitors.h>
#include <asm/insn.h>
#include <asm/patching.h>
#include <asm/text-patching.h>
#include <asm/set_memory.h>
#include "bpf_jit.h"

@@ -11,3 +11,4 @@ generic-y += qspinlock.h
generic-y += parport.h
generic-y += user.h
generic-y += vmlinux.lds.h
generic-y += text-patching.h

@@ -5,3 +5,4 @@ generic-y += extable.h
generic-y += iomap.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += text-patching.h

@@ -11,3 +11,4 @@ generic-y += ioctl.h
generic-y += mmzone.h
generic-y += statfs.h
generic-y += param.h
generic-y += text-patching.h

@@ -16,12 +16,7 @@ static inline int prepare_hugepage_range(struct file *file,
unsigned long len)
{
unsigned long task_size = STACK_TOP;
struct hstate *h = hstate_file(file);
if (len & ~huge_page_mask(h))
return -EINVAL;
if (addr & ~huge_page_mask(h))
return -EINVAL;
if (len > task_size)
return -ENOMEM;
if (task_size - len < addr)

@@ -268,8 +268,11 @@ extern void set_pmd_at(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp, pm
*/
extern void pgd_init(void *addr);
extern void pud_init(void *addr);
#define pud_init pud_init
extern void pmd_init(void *addr);
#define pmd_init pmd_init
extern void kernel_pte_init(void *addr);
#define kernel_pte_init kernel_pte_init
/*
* Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that

@@ -17,5 +17,6 @@ int set_memory_rw(unsigned long addr, int numpages);
bool kernel_page_present(struct page *page);
int set_direct_map_default_noflush(struct page *page);
int set_direct_map_invalid_noflush(struct page *page);
int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
#endif /* _ASM_LOONGARCH_SET_MEMORY_H */

@@ -216,3 +216,22 @@ int set_direct_map_invalid_noflush(struct page *page)
return __set_memory(addr, 1, __pgprot(0), __pgprot(_PAGE_PRESENT | _PAGE_VALID));
}
int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
{
unsigned long addr = (unsigned long)page_address(page);
pgprot_t set, clear;
if (addr < vm_map_base)
return 0;
if (valid) {
set = PAGE_KERNEL;
clear = __pgprot(0);
} else {
set = __pgprot(0);
clear = __pgprot(_PAGE_PRESENT | _PAGE_VALID);
}
return __set_memory(addr, 1, set, clear);
}

@@ -4,3 +4,4 @@ generic-y += extable.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += spinlock.h
generic-y += text-patching.h

@@ -14,7 +14,7 @@ extern unsigned long memory_end;
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false)
vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr)
#define __pa(vaddr) ((unsigned long)(vaddr))
#define __va(paddr) ((void *)((unsigned long)(paddr)))

@@ -8,3 +8,4 @@ generic-y += parport.h
generic-y += syscalls.h
generic-y += tlb.h
generic-y += user.h
generic-y += text-patching.h

@@ -13,3 +13,4 @@ generic-y += parport.h
generic-y += qrwlock.h
generic-y += qspinlock.h
generic-y += user.h
generic-y += text-patching.h

@@ -17,12 +17,7 @@ static inline int prepare_hugepage_range(struct file *file,
unsigned long len)
{
unsigned long task_size = STACK_TOP;
struct hstate *h = hstate_file(file);
if (len & ~huge_page_mask(h))
return -EINVAL;
if (addr & ~huge_page_mask(h))
return -EINVAL;
if (len > task_size)
return -ENOMEM;
if (task_size - len < addr)

@@ -317,7 +317,9 @@ static inline pmd_t *pud_pgtable(pud_t pud)
*/
extern void pgd_init(void *addr);
extern void pud_init(void *addr);
#define pud_init pud_init
extern void pmd_init(void *addr);
#define pmd_init pmd_init
/*
* Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that

@@ -105,6 +105,9 @@
#define MADV_COLLAPSE 25 /* Synchronous hugepage collapse */
#define MADV_GUARD_INSTALL 102 /* fatal signal on access to range */
#define MADV_GUARD_REMOVE 103 /* unguard range */
/* compatibility flags */
#define MAP_FILE 0

@@ -7,3 +7,4 @@ generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += spinlock.h
generic-y += user.h
generic-y += text-patching.h

@@ -9,3 +9,4 @@ generic-y += spinlock.h
generic-y += qrwlock_types.h
generic-y += qrwlock.h
generic-y += user.h
generic-y += text-patching.h

@@ -12,21 +12,6 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
pte_t *ptep);
/*
* If the arch doesn't supply something else, assume that hugepage
* size aligned regions are ok without further preparation.
*/
#define __HAVE_ARCH_PREPARE_HUGEPAGE_RANGE
static inline int prepare_hugepage_range(struct file *file,
unsigned long addr, unsigned long len)
{
if (len & ~HPAGE_MASK)
return -EINVAL;
if (addr & ~HPAGE_MASK)
return -EINVAL;
return 0;
}
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)

@@ -75,6 +75,9 @@
#define MADV_HWPOISON 100 /* poison a page for testing */
#define MADV_SOFT_OFFLINE 101 /* soft offline page for testing */
#define MADV_GUARD_INSTALL 102 /* fatal signal on access to range */
#define MADV_GUARD_REMOVE 103 /* unguard range */
/* compatibility flags */
#define MAP_FILE 0

@@ -20,7 +20,7 @@
#include <asm/assembly.h>
#include <asm/sections.h>
#include <asm/ftrace.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
#define __hot __section(".text.hot")

@@ -8,7 +8,7 @@
#include <linux/jump_label.h>
#include <linux/bug.h>
#include <asm/alternative.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
static inline int reassemble_17(int as17)
{

@@ -16,7 +16,7 @@
#include <asm/ptrace.h>
#include <asm/traps.h>
#include <asm/processor.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
#include <asm/cacheflush.h>
const struct kgdb_arch arch_kgdb_ops = {

@@ -12,7 +12,7 @@
#include <linux/kprobes.h>
#include <linux/slab.h>
#include <asm/cacheflush.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);

@@ -13,7 +13,7 @@
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
#include <asm/patch.h>
#include <asm/text-patching.h>
struct patch {
void *addr;

@@ -21,27 +21,6 @@
#include <asm/mmu_context.h>
unsigned long
hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct hstate *h = hstate_file(file);
if (len & ~huge_page_mask(h))
return -EINVAL;
if (len > TASK_SIZE)
return -ENOMEM;
if (flags & MAP_FIXED)
if (prepare_hugepage_range(file, addr, len))
return -EINVAL;
if (addr)
addr = ALIGN(addr, huge_page_size(h));
/* we need to make sure the colouring is OK */
return arch_get_unmapped_area(file, addr, len, pgoff, flags, 0);
}
pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,

@@ -21,7 +21,7 @@
#include <linux/percpu.h>
#include <linux/module.h>
#include <asm/probes.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#ifdef CONFIG_KPROBES
#define __ARCH_WANT_KPROBES_INSN_SLOT

@@ -13,7 +13,7 @@
#include <linux/io.h>
#include <linux/memblock.h>
#include <linux/of.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/kdump.h>
#include <asm/firmware.h>
#include <linux/uio.h>

@@ -9,7 +9,7 @@
#include <linux/of_fdt.h>
#include <asm/epapr_hcalls.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/machdep.h>
#include <asm/inst.h>

@@ -5,7 +5,7 @@
#include <linux/kernel.h>
#include <linux/jump_label.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/inst.h>
void arch_jump_label_transform(struct jump_entry *entry,

@@ -21,7 +21,7 @@
#include <asm/processor.h>
#include <asm/machdep.h>
#include <asm/debug.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <linux/slab.h>
#include <asm/inst.h>

@@ -21,7 +21,7 @@
#include <linux/slab.h>
#include <linux/set_memory.h>
#include <linux/execmem.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/cacheflush.h>
#include <asm/sstep.h>
#include <asm/sections.h>

@@ -18,7 +18,7 @@
#include <linux/bug.h>
#include <linux/sort.h>
#include <asm/setup.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
/* Count how many different relocations (different symbol, different
addend) */

@@ -17,7 +17,7 @@
#include <linux/kernel.h>
#include <asm/module.h>
#include <asm/firmware.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <linux/sort.h>
#include <asm/setup.h>
#include <asm/sections.h>

@@ -13,7 +13,7 @@
#include <asm/kprobes.h>
#include <asm/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/sstep.h>
#include <asm/ppc-opcode.h>
#include <asm/inst.h>

@@ -54,7 +54,7 @@
#include <asm/firmware.h>
#include <asm/hw_irq.h>
#endif
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/exec.h>
#include <asm/livepatch.h>
#include <asm/cpu_has_feature.h>

@@ -14,7 +14,7 @@
#include <linux/debugfs.h>
#include <asm/asm-prototypes.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/security_features.h>
#include <asm/sections.h>
#include <asm/setup.h>

@@ -40,7 +40,7 @@
#include <asm/time.h>
#include <asm/serial.h>
#include <asm/udbg.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/cpu_has_feature.h>
#include <asm/asm-prototypes.h>
#include <asm/kdump.h>

@@ -60,7 +60,7 @@
#include <asm/xmon.h>
#include <asm/udbg.h>
#include <asm/kexec.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/ftrace.h>
#include <asm/opal.h>
#include <asm/cputhreads.h>

@@ -2,7 +2,7 @@
#include <linux/memory.h>
#include <linux/static_call.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
{

@@ -23,7 +23,7 @@
#include <linux/list.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/ftrace.h>
#include <asm/syscall.h>
#include <asm/inst.h>

@@ -23,7 +23,7 @@
#include <linux/list.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/ftrace.h>
#include <asm/syscall.h>
#include <asm/inst.h>

@@ -17,7 +17,7 @@
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include <asm/page.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/inst.h>
static int __patch_mem(void *exec_addr, unsigned long val, void *patch_addr, bool is_dword)

@@ -16,7 +16,7 @@
#include <linux/sched/mm.h>
#include <linux/stop_machine.h>
#include <asm/cputable.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/interrupt.h>
#include <asm/page.h>
#include <asm/sections.h>

@@ -6,7 +6,7 @@
#include <linux/vmalloc.h>
#include <linux/init.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
static int __init instr_is_branch_to_addr(const u32 *instr, unsigned long addr)
{

@@ -11,7 +11,7 @@
#include <asm/cpu_has_feature.h>
#include <asm/sstep.h>
#include <asm/ppc-opcode.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/inst.h>
#define MAX_SUBTESTS 16

@@ -25,7 +25,7 @@
#include <asm/mmu.h>
#include <asm/machdep.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/sections.h>
#include <mm/mmu_decl.h>

@@ -57,7 +57,7 @@
#include <asm/sections.h>
#include <asm/copro.h>
#include <asm/udbg.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/fadump.h>
#include <asm/firmware.h>
#include <asm/tm.h>

@@ -24,7 +24,7 @@
#include <linux/pgtable.h>
#include <asm/udbg.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include "internal.h"

@@ -633,6 +633,20 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
}
EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
#ifdef CONFIG_HUGETLB_PAGE
static int file_to_psize(struct file *file)
{
struct hstate *hstate = hstate_file(file);
return shift_to_mmu_psize(huge_page_shift(hstate));
}
#else
static int file_to_psize(struct file *file)
{
return 0;
}
#endif
unsigned long arch_get_unmapped_area(struct file *filp,
unsigned long addr,
unsigned long len,
@@ -640,11 +654,17 @@ unsigned long arch_get_unmapped_area(struct file *filp,
unsigned long flags,
vm_flags_t vm_flags)
{
unsigned int psize;
if (radix_enabled())
return generic_get_unmapped_area(filp, addr, len, pgoff, flags, vm_flags);
return slice_get_unmapped_area(addr, len, flags,
mm_ctx_user_psize(&current->mm->context), 0);
if (filp && is_file_hugepages(filp))
psize = file_to_psize(filp);
else
psize = mm_ctx_user_psize(&current->mm->context);
return slice_get_unmapped_area(addr, len, flags, psize, 0);
}
unsigned long arch_get_unmapped_area_topdown(struct file *filp,
@@ -654,11 +674,17 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
const unsigned long flags,
vm_flags_t vm_flags)
{
unsigned int psize;
if (radix_enabled())
return generic_get_unmapped_area_topdown(filp, addr0, len, pgoff, flags, vm_flags);
return slice_get_unmapped_area(addr0, len, flags,
mm_ctx_user_psize(&current->mm->context), 1);
if (filp && is_file_hugepages(filp))
psize = file_to_psize(filp);
else
psize = mm_ctx_user_psize(&current->mm->context);
return slice_get_unmapped_area(addr0, len, flags, psize, 1);
}
unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
@@ -788,20 +814,4 @@ unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
return 1UL << mmu_psize_to_shift(get_slice_psize(vma->vm_mm, vma->vm_start));
}
static int file_to_psize(struct file *file)
{
struct hstate *hstate = hstate_file(file);
return shift_to_mmu_psize(huge_page_shift(hstate));
}
unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
{
if (radix_enabled())
return generic_hugetlb_get_unmapped_area(file, addr, len, pgoff, flags);
return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
}
#endif

@@ -7,7 +7,7 @@
#include <linux/memblock.h>
#include <linux/sched/task.h>
#include <asm/pgalloc.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <mm/mmu_decl.h>
static pgprot_t __init kasan_prot_ro(void)

@@ -26,7 +26,7 @@
#include <asm/svm.h>
#include <asm/mmzone.h>
#include <asm/ftrace.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/setup.h>
#include <asm/fixmap.h>

@@ -24,7 +24,7 @@
#include <asm/mmu.h>
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/smp.h>
#include <mm/mmu_decl.h>

@@ -10,7 +10,7 @@
#include <asm/pgalloc.h>
#include <asm/tlb.h>
#include <asm/dma.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <mm/mmu_decl.h>

@@ -37,7 +37,7 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/cputhreads.h>
#include <asm/hugetlb.h>
#include <asm/paca.h>

@@ -24,7 +24,7 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include <asm/tlb.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/cputhreads.h>
#include <mm/mmu_decl.h>

@@ -398,7 +398,7 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
*/
if (pmd_none(*pmd))
return;
pte = pte_offset_map_nolock(mm, pmd, addr, &ptl);
pte = pte_offset_map_ro_nolock(mm, pmd, addr, &ptl);
BUG_ON(!pte);
assert_spin_locked(ptl);
pte_unmap(pte);

@@ -18,7 +18,7 @@
#include <linux/bpf.h>
#include <asm/kprobes.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include "bpf_jit.h"

@@ -14,7 +14,7 @@
#include <asm/machdep.h>
#include <asm/firmware.h>
#include <asm/ptrace.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/inst.h>
#define PERF_8xx_ID_CPU_CYCLES 1

@@ -16,7 +16,7 @@
#include <asm/machdep.h>
#include <asm/firmware.h>
#include <asm/ptrace.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/hw_irq.h>
#include <asm/interrupt.h>

@@ -23,7 +23,7 @@
#include <asm/mpic.h>
#include <asm/cacheflush.h>
#include <asm/dbell.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/cputhreads.h>
#include <asm/fsl_pm.h>

@@ -12,7 +12,7 @@
#include <linux/delay.h>
#include <linux/pgtable.h>
#include <asm/code-patching.h>
#include <asm/text-patching.h>
#include <asm/page.h>
#include <asm/pci-bridge.h>
#include <asm/mpic.h>

Some files were not shown because too many files have changed in this diff.