9569 Commits

Author SHA1 Message Date
Stephen Rothwell
ada24f0158 Merge branch 'mm-everything' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 2025-01-13 09:34:41 +11:00
Stephen Rothwell
629d79aa0d Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 2025-01-13 09:34:40 +11:00
Andrew Morton
972ccc84da foo 2025-01-11 23:20:51 -08:00
Kuan-Wei Chiu
6612ac8c62 lib/list_sort: clarify comparison function requirements in list_sort()
Add a detailed explanation in the list_sort() kernel doc comment
specifying that the comparison function must satisfy antisymmetry and
transitivity.  These properties are essential for the sorting algorithm to
produce correct results.

Issues have arisen in the past [1][2][3][4] where comparison functions
violated the transitivity property, causing sorting algorithms to fail to
correctly order elements.  While these requirements may seem
straightforward, they are commonly misunderstood or overlooked, leading to
bugs.  Highlighting these properties in the documentation will help
prevent such mistakes in the future.
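
For illustration, a comparison callback that satisfies both properties might
look like the sketch below (struct and field names are invented for the
example); it derives a total order from a single key and avoids the
subtraction idiom, which can overflow and break transitivity:

```
struct item {
	struct list_head list;
	int key;
};

static int item_cmp(void *priv, const struct list_head *a,
		    const struct list_head *b)
{
	const struct item *ia = list_entry(a, struct item, list);
	const struct item *ib = list_entry(b, struct item, list);

	/*
	 * Do not use "ia->key - ib->key": the subtraction can overflow
	 * and the resulting order is then no longer transitive.
	 */
	if (ia->key < ib->key)
		return -1;
	return ia->key > ib->key;
}

/* list_sort(NULL, &head, item_cmp); */
```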

Link: https://lore.kernel.org/lkml/20240701205639.117194-1-visitorckw@gmail.com [1]
Link: https://lore.kernel.org/lkml/20241203202228.1274403-1-visitorckw@gmail.com [2]
Link: https://lore.kernel.org/lkml/20241209134226.1939163-1-visitorckw@gmail.com [3]
Link: https://lore.kernel.org/lkml/20241209145728.1975311-1-visitorckw@gmail.com [4]
Link: https://lkml.kernel.org/r/20250106170104.3137845-3-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw>
Cc: <chuang@cs.nycu.edu.tw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:43 -08:00
Kuan-Wei Chiu
210c73092f lib/sort: clarify comparison function requirements in sort_r()
Patch series "lib: clarify comparison function requirements", v2.

Add a detailed explanation in the sort_r/list_sort kernel doc comment
specifying that the comparison function must satisfy antisymmetry and
transitivity.  These properties are essential for the sorting algorithm to
produce correct results.

Issues have arisen in the past [1][2][3][4] where comparison functions
violated the transitivity property, causing sorting algorithms to fail to
correctly order elements.  While these requirements may seem
straightforward, they are commonly misunderstood or overlooked, leading to
bugs.  Highlighting these properties in the documentation will help
prevent such mistakes in the future.

Link: https://lore.kernel.org/lkml/20240701205639.117194-1-visitorckw@gmail.com [1]
Link: https://lore.kernel.org/lkml/20241203202228.1274403-1-visitorckw@gmail.com [2]
Link: https://lore.kernel.org/lkml/20241209134226.1939163-1-visitorckw@gmail.com [3]
Link: https://lore.kernel.org/lkml/20241209145728.1975311-1-visitorckw@gmail.com [4]


This patch (of 2):

Add a detailed explanation in the sort_r() kernel doc comment specifying
that the comparison function must satisfy antisymmetry and transitivity. 
These properties are essential for the sorting algorithm to produce
correct results.

Issues have arisen in the past [1][2][3][4] where comparison functions
violated the transitivity property, causing sorting algorithms to fail to
correctly order elements.  While these requirements may seem
straightforward, they are commonly misunderstood or overlooked, leading to
bugs.  Highlighting these properties in the documentation will help
prevent such mistakes in the future.
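
As an illustration only (the names here are invented, not taken from the
patch), a sort_r() comparison callback meeting these requirements could look
like:

```
struct rec {
	u32 weight;
};

static int rec_cmp_r(const void *a, const void *b, const void *priv)
{
	const struct rec *ra = a, *rb = b;

	/* Total order on a single key: antisymmetric and transitive. */
	if (ra->weight < rb->weight)
		return -1;
	return ra->weight > rb->weight;
}

/* sort_r(recs, nr, sizeof(*recs), rec_cmp_r, NULL, NULL); */
```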

Link: https://lkml.kernel.org/r/20250106170104.3137845-1-visitorckw@gmail.com
Link: https://lore.kernel.org/lkml/20240701205639.117194-1-visitorckw@gmail.com [1]
Link: https://lore.kernel.org/lkml/20241203202228.1274403-1-visitorckw@gmail.com [2]
Link: https://lore.kernel.org/lkml/20241209134226.1939163-1-visitorckw@gmail.com [3]
Link: https://lore.kernel.org/lkml/20241209145728.1975311-1-visitorckw@gmail.com [4]
Link: https://lkml.kernel.org/r/20250106170104.3137845-2-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw>
Cc: <chuang@cs.nycu.edu.tw>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:43 -08:00
Ariel Otilibili
5618ee438d lib/inflate.c: remove dead code
This is a follow-up to a discussion in Xen:

The if-statement tests that `res` is non-zero, meaning the zero case is
never reached.

Link: https://lore.kernel.org/all/7587b503-b2ca-4476-8dc9-e9683d4ca5f0@suse.com/
Link: https://lkml.kernel.org/r/20241219092615.644642-2-ariel.otilibili-anieli@eurecom.fr
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Ariel Otilibili <ariel.otilibili-anieli@eurecom.fr>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony.perard@vates.tech>
Cc: Michal Orzel <michal.orzel@amd.com>
Cc: Julien Grall <julien@xen.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:38 -08:00
Matthew Wilcox (Oracle)
0bda6453f8 iov_iter: remove setting of page->index
Nothing actually checks page->index, so just remove it.

Link: https://lkml.kernel.org/r/20241216161253.37687-1-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:37 -08:00
Luis Felipe Hernandez
243bfc0ca7 lib/math: add int_sqrt test suite
Add a test suite for the integer-based square root function.

The test suite is designed to verify the correctness of the int_sqrt()
math library function.
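
A minimal sketch of what such a KUnit case could look like (the case and
suite names here are illustrative, not necessarily those used in the patch):

```
#include <kunit/test.h>
#include <linux/math.h>

static void int_sqrt_basic_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, int_sqrt(0), 0UL);
	KUNIT_EXPECT_EQ(test, int_sqrt(1), 1UL);
	KUNIT_EXPECT_EQ(test, int_sqrt(25), 5UL);
	KUNIT_EXPECT_EQ(test, int_sqrt(26), 5UL);	/* result rounds down */
}

static struct kunit_case int_sqrt_cases[] = {
	KUNIT_CASE(int_sqrt_basic_test),
	{}
};

static struct kunit_suite int_sqrt_suite = {
	.name = "int_sqrt",
	.test_cases = int_sqrt_cases,
};
kunit_test_suite(int_sqrt_suite);
```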

Link: https://lkml.kernel.org/r/20241213042701.1037467-1-luis.hernandez093@gmail.com
Signed-off-by: Luis Felipe Hernandez <luis.hernandez093@gmail.com>
Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Cc: David Gow <davidgow@google.com>
Cc: Ricardo B. Marliere <rbm@suse.com>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:31 -08:00
Kemeng Shi
7e618a7051 Xarray: use xa_mark_t in xas_squash_marks() to keep code consistent
Besides xas_squash_marks(), all functions use the xa_mark_t type to iterate
over all possible marks.  Use xa_mark_t in xas_squash_marks() as well to
keep the code consistent.

Link: https://lkml.kernel.org/r/20241213122523.12764-6-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:30 -08:00
Kemeng Shi
f51e20d11a Xarray: remove repeat check in xas_squash_marks()
The caller of xas_squash_marks() has already ensured that xas->xa_sibs is
non-zero.  Just remove the repeated check of xas->xa_sibs in
xas_squash_marks().

Link: https://lkml.kernel.org/r/20241213122523.12764-5-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:30 -08:00
Kemeng Shi
2f984bc655 Xarray: distinguish large entries correctly in xas_split_alloc()
We don't support large entries whose split would require expanding by two
more levels of xa_node.  For the case "xas->xa_shift + 2 * XA_CHUNK_SHIFT ==
order" we also need two levels of xa_node to expand, so distinguish the
entry as a large entry in that case as well.

As the max order of a folio in the pagecache (MAX_PAGECACHE_ORDER) is <=
(XA_CHUNK_SHIFT * 2 - 1), this change is more of a cleanup...

Link: https://lkml.kernel.org/r/20241213122523.12764-4-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:30 -08:00
Kemeng Shi
f22e86663e Xarray: move forward index correctly in xas_pause()
After xas_load(), xas->index could point to the middle of a found
multi-index entry, and the bits of xas->index under node->shift may be
non-zero.  A subsequent xas_pause() then advances xas->index by one slot at
the node's shift granularity with the bits under node->shift left unmasked,
and thus unexpectedly skips some indices.

Consider following case:
Assume XA_CHUNK_SHIFT is 4.
xa_store_range(xa, 16, 31, ...)
xa_store(xa, 32, ...)
XA_STATE(xas, xa, 17);
xas_for_each(&xas,...)
xas_load(&xas)
/* xas->index = 17, xas->xa_offset = 1, xas->xa_node->xa_shift = 4 */
xas_pause()
/* xas->index = 33, xas->xa_offset = 2, xas->xa_node->xa_shift = 4 */
As we can see, index 32 is skipped unexpectedly.

Fix this by masking the bits under node->shift when moving the index
forward in xas_pause().
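
Illustratively, the masking amounts to something like the following (a
sketch of the idea, not the exact diff):

```
unsigned long shift = xas->xa_node->shift;

xas->xa_index &= ~((1UL << shift) - 1);	/* drop the bits under node->shift */
xas->xa_index += 1UL << shift;		/* advance by exactly one slot */
```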

For now, this does not cause serious problems; only minor issues, such as
cachestat returning fewer page statuses than expected, can happen.

Link: https://lkml.kernel.org/r/20241213122523.12764-3-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:30 -08:00
Kemeng Shi
528a5ef711 Xarray: do not return sibling entries from xas_find_marked()
Patch series "Fixes and cleanups to xarray", v3.

This series contains some random fixes and cleanups to xarray.  Patches 1-2
are fixes and patches 3-6 are cleanups.  More details can be found in the
respective patches.


This patch (of 5):

Similar to the issue fixed in commit cbc02854331ed ("XArray: Do not return
sibling entries from xa_load()"), we may return sibling entries from
xas_find_marked() as follows:

    Thread A:               Thread B:
                            xa_store_range(xa, entry, 6, 7, gfp);
			    xa_set_mark(xa, 6, mark)
    XA_STATE(xas, xa, 6);
    xas_find_marked(&xas, 7, mark);
    offset = xas_find_chunk(xas, advance, mark);
    [offset is 6 which points to a valid entry]
                            xa_store_range(xa, entry, 4, 7, gfp);
    entry = xa_entry(xa, node, 6);
    [entry is a sibling of 4]
    if (!xa_is_node(entry))
        return entry;

Skip sibling entries like xas_find() does to protect callers from seeing
sibling entries returned from xas_find_marked(); otherwise a caller may use
a sibling entry as a valid entry and crash the kernel.
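
A sketch of what skipping a sibling entry can look like, modelled on what
xas_find() does; this is illustrative, not the literal diff:

```
entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
if (xa_is_sibling(entry)) {
	/*
	 * Jump to the canonical slot of the multi-index entry and
	 * re-evaluate it instead of handing a sibling to the caller.
	 */
	xas->xa_offset = xa_to_sibling(entry);
	continue;
}
```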

Besides, the load_race() test is modified to catch the mentioned issue; the
modified load_race() only passes after this fix is merged.

Here is an example of how this bug could in theory be triggered in nfs,
which enables large folios in the mapping.
Let's take a look at the racers involved:
1. How pages could be created and dirtied in nfs.
write
 ksys_write
  vfs_write
   new_sync_write
    nfs_file_write
     generic_perform_write
      nfs_write_begin
       fgf_set_order
        __filemap_get_folio
      nfs_write_end
       nfs_update_folio
        nfs_writepage_setup
	 nfs_mark_request_dirty
	  filemap_dirty_folio
	   __folio_mark_dirty
	    __xa_set_mark

2. How dirty pages could be deleted in nfs.
ioctl
 do_vfs_ioctl
  file_ioctl
   ioctl_preallocate
    vfs_fallocate
     nfs42_fallocate
      nfs42_proc_deallocate
       truncate_pagecache_range
        truncate_inode_pages_range
	 truncate_inode_folio
	  filemap_remove_folio
	   page_cache_delete
	    xas_store(&xas, NULL);

3. How dirty pages could be lockless searched
sync_file_range
 ksys_sync_file_range
  __filemap_fdatawrite_range
   filemap_fdatawrite_wbc
    do_writepages
     writeback_use_writepage
      writeback_iter
       writeback_get_folio
        filemap_get_folios_tag
         find_get_entry
          folio = xas_find_marked()
          folio_try_get(folio)

In theory, the kernel will crash as follows:
1.Create               2.Search             3.Delete
/* write page 2,3 */
write
 ...
  nfs_write_begin
   fgf_set_order
   __filemap_get_folio
    ...
     /* index = 2, order = 1 */
     xa_store(&xas, folio)
  nfs_write_end
   ...
    __folio_mark_dirty

                       /* sync page 2 and page 3 */
                       sync_file_range
                        ...
                         find_get_entry
                          folio = xas_find_marked()
                          /* offset will be 2 */
                          offset = xas_find_chunk()

                                             /* delete page 2 and page 3 */
                                             ioctl
                                              ...
                                               xas_store(&xas, NULL);

/* write page 0-3 */
write
 ...
  nfs_write_begin
   fgf_set_order
   __filemap_get_folio
    ...
     /* index = 0, order = 2 */
     xa_store(&xas, folio)
  nfs_write_end
   ...
    __folio_mark_dirty

                          /* get sibling entry from offset 2 */
                          entry = xa_entry(.., 2)
                          /* use sibling entry as folio and crash kernel */
                          folio_try_get(folio)

Link: https://lkml.kernel.org/r/20241218154613.58754-2-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20241213122523.12764-1-shikemeng@huaweicloud.com
Link: https://lkml.kernel.org/r/20241213122523.12764-2-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:29 -08:00
Andrew Morton
65be08d27e fault-inject-use-prandom-where-cryptographically-secure-randomness-is-not-needed-fix
tweak and reflow comment

Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:23 -08:00
Akinobu Mita
dfc929ac37 fault-inject: use prandom where cryptographically secure randomness is not needed
Currently get_random*() is used to determine the probability of fault
injection, but cryptographically secure random numbers are not required.

There is no big problem in using prandom instead of get_random*() to
determine the probability of fault injection, and it also avoids acquiring
a spinlock, which is unsafe in some contexts.
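
To make the substitution concrete, here is a sketch of a per-CPU prandom
draw; the state and helper names are illustrative (the actual fault-inject
code differs), and the per-CPU state would need to be seeded once at init:

```
#include <linux/percpu.h>
#include <linux/prandom.h>

static DEFINE_PER_CPU(struct rnd_state, fault_rnd_state);

static bool roll_should_fail(unsigned int probability)
{
	u32 r = prandom_u32_state(this_cpu_ptr(&fault_rnd_state));

	/*
	 * No crypto-grade entropy and no spinlock needed just to decide
	 * whether to inject a fault (seeding and preemption details are
	 * omitted in this sketch).
	 */
	return (r % 100) < probability;
}
```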

Link: https://lore.kernel.org/lkml/20241129120939.GG35539@noisy.programming.kicks-ass.net
Link: https://lkml.kernel.org/r/20241208142415.205960-1-akinobu.mita@gmail.com
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:23 -08:00
Andrew Morton
41756d1129 xarray-port-tests-to-kunit-fix
Fix cocci warning:

lib/test_xarray.c:1019:52-53: WARNING comparing pointer to 0

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202412081700.YXB3vBbg-lkp@intel.com/
Cc: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:22 -08:00
Tamir Duberstein
31068df168 xarray: port tests to kunit
Minimally rewrite the XArray unit tests to use kunit.  This integrates
nicely with existing kunit tools which produce nicer human-readable output
compared to the existing machinery.

Running the xarray tests before this change requires an obscure
invocation

```
tools/testing/kunit/kunit.py run --arch arm64 --make_options LLVM=1 \
  --kconfig_add CONFIG_TEST_XARRAY=y --raw_output=all nothing
```

which on failure produces

```
BUG at check_reserve:513
...
XArray: 6782340 of 6782364 tests passed
```

and exits 0.

Running the xarray tests after this change requires a simpler invocation

```
tools/testing/kunit/kunit.py run --arch arm64 --make_options LLVM=1 \
  xarray
```

which on failure produces (colors omitted)

```
[09:50:53] ====================== check_reserve  ======================
[09:50:53] [FAILED] param-0
[09:50:53]     # check_reserve: EXPECTATION FAILED at lib/test_xarray.c:536
[09:50:53] xa_erase(xa, 12345678) != NULL
...
[09:50:53]     # module: test_xarray
[09:50:53] # xarray: pass:26 fail:3 skip:0 total:29
[09:50:53] # Totals: pass:28 fail:3 skip:0 total:31
[09:50:53] ===================== [FAILED] xarray ======================
```

and exits 1.

Use of richer kunit assertions is intentionally omitted to reduce the
scope of the change.

Link: https://lkml.kernel.org/r/20241205-xarray-kunit-port-v1-1-ee44bc7aa201@gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Cc: Bill Wendling <morbo@google.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Justin Stitt <justinstitt@google.com>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:22 -08:00
Tamir Duberstein
8becc9d819 xarray-extract-helper-from-__xa_insertcmpxchg-fix
fix __xa_erase()

Link: https://lkml.kernel.org/r/CAJ-ks9kN_qddZ3Ne5d=cADu5POC1rHd4rQcbVSD_spnZOrLLZg@mail.gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Reported-by: <syzbot+092bbab7da235a02a03a@syzkaller.appspotmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:20 -08:00
Tamir Duberstein
5801c2876b xarray: extract helper from __xa_{insert,cmpxchg}
Reduce code duplication by extracting a static inline function.  This
function is identical to __xa_cmpxchg with the exception that it does not
coerce zero entries to null on the return path.

Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-2-dc2bdd8c4136@gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:20 -08:00
Tamir Duberstein
e5a5560bcb xarray: extract xa_zero_to_null
Patch series "xarray: extract __xa_cmpxchg_raw".

This series reduces duplication between __xa_cmpxchg and __xa_insert by
extracting a new function that does not coerce zero entries to null on the
return path.

The new function may be used by the upcoming Rust xarray abstraction in
its reservation API where it is useful to tell the difference between zero
entries and null slots.


This patch (of 2):

Reduce code duplication by extracting a static inline function that
returns its argument if it is non-zero and NULL otherwise.

This changes xas_result to check for errors before checking for zero but
this cannot change the behavior of existing callers:
- __xa_erase: passes the result of xas_store(_, NULL) which cannot fail.
- __xa_store: passes the result of xas_store(_, entry) which may fail.
  xas_store calls xas_create when entry is not NULL which returns NULL
  on error, which is immediately checked. This should not change
  observable behavior.
- __xa_cmpxchg: passes the result of xas_load(_) which might be zero.
  This would previously return NULL regardless of the outcome of
  xas_store but xas_store cannot fail if xas_load returns zero
  because there is no need to allocate memory.
- xa_store_range: same as __xa_erase.
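
Given the description above, the extracted helper presumably boils down to
something like:

```
static inline void *xa_zero_to_null(void *entry)
{
	/* Report reserved "zero" entries to the caller as NULL. */
	return xa_is_zero(entry) ? NULL : entry;
}
```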

Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-0-dc2bdd8c4136@gmail.com
Link: https://lkml.kernel.org/r/20241112-xarray-insert-cmpxchg-v1-1-dc2bdd8c4136@gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Cc: Alice Ryhl <aliceryhl@google.com>
Cc: Andreas Hindborg <a.hindborg@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:20 -08:00
Kuan-Wei Chiu
005e3ec8ca lib/test_min_heap: use inline min heap variants to reduce attack vector
To address concerns about increasing the attack vector, remove the select
MIN_HEAP dependency from TEST_MIN_HEAP in Kconfig.debug.

Additionally, all min heap test function calls in lib/test_min_heap.c are
replaced with their inline variants.  By exclusively using inline
variants, we eliminate the need to enable CONFIG_MIN_HEAP for testing
purposes.

Link: https://lore.kernel.org/lkml/CAMuHMdVO5DPuD9HYWBFqKDHphx7+0BEhreUxtVC40A=8p6VAhQ@mail.gmail.com
Link: https://lkml.kernel.org/r/20241129181222.646855-3-visitorckw@gmail.com
Signed-off-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Suggested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ching-Chun (Jim) Huang <jserv@ccns.ncku.edu.tw>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:19 -08:00
Pratyush Mittal
947363f19e lib/rhashtable: fix the typo for preemptible
Fix the spelling of the misspelled word 'preemptible'.

Link: https://lkml.kernel.org/r/20241123102929.11660-1-pratyushmittal@gmail.com
Signed-off-by: Pratyush Mittal <pratyushmittal@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:20:15 -08:00
Luiz Capitulino
655b9a0a16 mm: alloc_pages_bulk: rename API
The previous commit removed the page_list argument from
alloc_pages_bulk_noprof() along with the alloc_pages_bulk_list() function.

Now that only the *_array() flavour of the API remains, we can do the
following renaming (along with the _noprof() ones):

  alloc_pages_bulk_array -> alloc_pages_bulk
  alloc_pages_bulk_array_mempolicy -> alloc_pages_bulk_mempolicy
  alloc_pages_bulk_array_node -> alloc_pages_bulk_node

Link: https://lkml.kernel.org/r/275a3bbc0be20fbe9002297d60045e67ab3d4ada.1734991165.git.luizcap@redhat.com
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:19:43 -08:00
Guo Weikang
7f9387a7d5 mm/memblock: add memblock_alloc_or_panic interface
Before SLUB initialization, various subsystems used memblock_alloc to
allocate memory.  In most cases, when memory allocation fails, an
immediate panic is required.  To simplify this behavior and reduce
repetitive checks, introduce `memblock_alloc_or_panic`.  This function
ensures that memory allocation failures result in a panic automatically,
improving code readability and consistency across subsystems that require
this behavior.
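
A sketch of the interface as described (whether the real helper is a
function or a macro that reports the caller may differ in detail):

```
static inline void *memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align)
{
	void *addr = memblock_alloc(size, align);

	/* Early-boot allocation failure is unrecoverable: panic here. */
	if (!addr)
		panic("%s: Failed to allocate %pap bytes\n", __func__, &size);
	return addr;
}
```

Callers can then drop their repeated "if (!p) panic(...)" pattern.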

Link: https://lkml.kernel.org/r/20250102072528.650926-1-guoweikang.kernel@gmail.com
Signed-off-by: Guo Weikang <guoweikang.kernel@gmail.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
Reviewed-by: Alexander Gordeev <agordeev@linux.ibm.com>	[s390]
Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:19:23 -08:00
Maninder Singh
15fdc79c59 lib/list_debug.c: add object information in case of invalid object
Currently, on linked list corruption, the culprit address and its wrong
value are printed, but sometimes that is not enough to catch the actual
point of the issue.

If the allocation and free paths of the corrupted node are printed as well,
it becomes a lot easier to find and fix such issues.

Add the same information when a data mismatch is found in the linked list
debug checks:

[   14.243055]  slab kmalloc-32 start ffff0000cda19320 data offset 32 pointer offset 8 size 32 allocated at add_to_list+0x28/0xb0
[   14.245259]     __kmalloc_cache_noprof+0x1c4/0x358
[   14.245572]     add_to_list+0x28/0xb0
...
[   14.248632]     do_el0_svc_compat+0x1c/0x34
[   14.249018]     el0_svc_compat+0x2c/0x80
[   14.249244]  Free path:
[   14.249410]     kfree+0x24c/0x2f0
[   14.249724]     do_force_corruption+0xbc/0x100
...
[   14.252266]     el0_svc_common.constprop.0+0x40/0xe0
[   14.252540]     do_el0_svc_compat+0x1c/0x34
[   14.252763]     el0_svc_compat+0x2c/0x80
[   14.253071] ------------[ cut here ]------------
[   14.253303] list_del corruption. next->prev should be ffff0000cda192a8, but was 6b6b6b6b6b6b6b6b. (next=ffff0000cda19348)
[   14.254255] WARNING: CPU: 3 PID: 84 at lib/list_debug.c:65 __list_del_entry_valid_or_report+0x158/0x164

Move the prototype of mem_dump_obj() to bug.h, as mm.h cannot be included
in bug.h.

Link: https://lkml.kernel.org/r/20241230101043.53773-1-maninder1.s@samsung.com
Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
Acked-by: Jan Kara <jack@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Marco Elver <elver@google.com>
Cc: Rohit Thapliyal <r.thapliyal@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:19:22 -08:00
Suren Baghdasaryan
f4b45be6de alloc_tag: avoid current->alloc_tag manipulations when profiling is disabled
When memory allocation profiling is disabled there is no need to update
current->alloc_tag and these manipulations add unnecessary overhead.  Fix
the overhead by skipping these extra updates.
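
Conceptually the change is an early bail-out guarded by the profiling
static key, roughly like the sketch below (the helper shape and return
convention are illustrative; the real patch may differ):

```
static inline struct alloc_tag *alloc_tag_save(struct alloc_tag *tag)
{
	struct alloc_tag *prev;

	/* Skip the current->alloc_tag update when profiling is disabled. */
	if (!mem_alloc_profiling_enabled())
		return NULL;

	prev = current->alloc_tag;
	current->alloc_tag = tag;
	return prev;
}
```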

Link: https://lkml.kernel.org/r/20241226211639.1357704-1-surenb@google.com
Fixes: b951aaff5035 ("mm: enable page allocation tagging")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: David Wang <00107082@163.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zhenhua Huang <quic_zhenhuah@quicinc.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:19:18 -08:00
Liam R. Howlett
20e827315c test_maple_tree: test exhausted upper limit of mtree_alloc_cyclic()
When the upper bound of the search is exhausted, the maple state may be
returned in an error state of -EBUSY.  This means the maple state needs to
be reset before the second search in mas_alloc_cyclic() to ensure the
second search actually happens.  This test ensures the issue is not
reintroduced.

Link: https://lkml.kernel.org/r/20241216190113.1226145-3-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Reviewed-by: Yang Erkun <yangerkun@huawei.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:19:00 -08:00
Wei Yang
539c297495 maple_tree: only root node could be deficient
Each level's rightmost node should have (max == ULONG_MAX), which means the
current validation skips the rightmost node on each level.

Only the root node may be below the minimum data threshold.

Link: https://lkml.kernel.org/r/20241113031616.10530-4-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:34 -08:00
Wei Yang
e1d4306b05 maple_tree: add a test check deficient node
Add a test that asserts when splitting results in a deficient node.

We can achieve this by building a tree with two nodes: the left node holds
consecutive data starting from 0 and leaves some room so that the final
insert lands in the left node, while the right node is full, forcing the
split to happen on the left node.

Link: https://lkml.kernel.org/r/20241113031616.10530-3-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:34 -08:00
Wei Yang
b86414802d maple_tree: simplify split calculation
Patch series "simplify split calculation", v3.


This patch (of 3):

The current calculation for splitting nodes tries to enforce a minimum
span on the leaf nodes.  This code is complex and never worked correctly
to begin with, due to the min value being passed as 0 for all leaves.

The calculation should just split the data as equally as possible
between the new nodes.  Note that b_end will be one more than the data,
so the left side is still favoured in the calculation.

The current code may also lead to a deficient node by not leaving enough
data for the right side of the split. This issue is also addressed with
the split calculation change.

[Liam.Howlett@Oracle.com: rephrase the change log]
Link: https://lkml.kernel.org/r/20241113031616.10530-1-richard.weiyang@gmail.com
Link: https://lkml.kernel.org/r/20241113031616.10530-2-richard.weiyang@gmail.com
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:34 -08:00
Wei Yang
acb751ac74 maple_tree: we don't set offset to MAPLE_NODE_SLOTS on error
When mas_anode_descend() does not find a gap, it sets -EBUSY instead of
setting the offset to MAPLE_NODE_SLOTS.

Link: https://lkml.kernel.org/r/20241116014805.11547-4-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:32 -08:00
Wei Yang
c1f6da2598 maple_tree: not possible to be a root node after loop
An empty tree and a single-entry tree are handled elsewhere, so the maple
tree here must be a tree with nodes.

If the height is 1 and we found the gap, it will jump to *done* since it is
also a leaf.

If the height is more than one and there may be an available range, we will
descend the tree, so the node is no longer the root.

If there is no available range, we set the error and return.

This means the check for the root node here is not necessary.

Link: https://lkml.kernel.org/r/20241116014805.11547-3-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:32 -08:00
Wei Yang
672e5ad505 maple_tree: index has been checked to be smaller than pivot
Patch series "mas_anode_descend() related cleanup".

Some cleanup related to mas_anode_descend().


This patch (of 3):

At the beginning of the loop, it has already been checked that the range is
within the lower bound.

Link: https://lkml.kernel.org/r/20241116014805.11547-1-richard.weiyang@gmail.com
Link: https://lkml.kernel.org/r/20241116014805.11547-2-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:31 -08:00
Wei Yang
9a61f8bff8 maple_tree: use mas_next_slot() directly
The loop condition makes sure (mas.last < max), so we can directly use
mas_next_slot() here.

Since there are no other users of mas_next_entry(), it is removed.

Link: https://lkml.kernel.org/r/20241125024156.26093-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:18:22 -08:00
Suren Baghdasaryan
d1c2903592 alloc_tag: skip pgalloc_tag_swap if profiling is disabled
When memory allocation profiling is disabled, there is no need to swap
allocation tags during migration.  Skip it to avoid unnecessary overhead.

Once I added these checks, the overhead of the mode where memory profiling
is compiled in but turned off at runtime went down by about 50%.

I ran more comprehensive testing on Pixel 6 on Big, Medium and Little cores:

                 Overhead before fixes            Overhead after fixes
                 slab alloc      page alloc          slab alloc      page alloc
Big               6.21%           5.32%                3.31%          4.93%
Medium            4.51%           5.05%                3.79%          4.39%
Little            7.62%           1.82%                6.68%          1.02%

This is an allocation microbenchmark doing allocations in a tight loop.
Not a really realistic scenario and useful only to make performance
comparisons.

Link: https://lkml.kernel.org/r/20241226211639.1357704-2-surenb@google.com
Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Cc: David Wang <00107082@163.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zhenhua Huang <quic_zhenhuah@quicinc.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-01-11 23:17:24 -08:00
Linus Torvalds
fa47906ff3 vsnprintf: fix up kerneldoc for argument name changes
Stephen Rothwell reports that I missed fixing up the documentation when
the argument names changed in commit 938df695e98d ("vsprintf: associate
the format state with the format pointer"), resulting in htmldoc
warnings like

  lib/vsprintf.c:2760: warning: Function parameter or struct member 'fmt_str' not described in 'vsnprintf'
  lib/vsprintf.c:2760: warning: Excess function parameter 'fmt' description in 'vsnprintf'
  ...

which I didn't notice because the doc build takes longer than the whole
"real" kernel build for me, so I never bother (and judging by the other
warnings, pretty much nobody else does either).

I guess the bigger issues won't be fixed until the doc build is much
faster (narrator: "That isn't in the cards") but at least linux-next
finds the new cases.

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: 938df695e98d ("vsprintf: associate the format state with the format pointer")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2025-01-06 06:31:11 -08:00
Yang Erkun
1fd8bc7cd8 maple_tree: reload mas before the second call for mas_empty_area
Change the LONG_MAX in simple_offset_add() to 1024, and then do the following:

[root@fedora ~]# mkdir /tmp/dir
[root@fedora ~]# for i in {1..1024}; do touch /tmp/dir/$i; done
touch: cannot touch '/tmp/dir/1024': Device or resource busy
[root@fedora ~]# rm /tmp/dir/123
[root@fedora ~]# touch /tmp/dir/1024
[root@fedora ~]# rm /tmp/dir/100
[root@fedora ~]# touch /tmp/dir/1025
touch: cannot touch '/tmp/dir/1025': Device or resource busy

After we delete file 100, its slot actually becomes an empty entry, but the
subsequent create fails unexpectedly.

mas_alloc_cyclic() has two chances to find an empty entry.  First it
searches the range from range_lo to range_hi; if no empty entry exists and
range_lo > min, it retries with the range from min to range_hi.  However,
the first call to mas_empty_area() may mark the mas as EBUSY, in which case
the second call to mas_empty_area() returns immediately without searching.
Fix this by reloading the mas before the second call to mas_empty_area().
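
A sketch of the resulting retry logic (illustrative, not the literal diff):

```
ret = mas_empty_area(&mas, range_lo, range_hi, 1);
if (ret == -EBUSY && range_lo > min) {
	mas_reset(&mas);	/* clear the error state left by the first search */
	ret = mas_empty_area(&mas, min, range_hi, 1);
}
```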

[Liam.Howlett@Oracle.com: fix mas_alloc_cyclic() second search]
  Link: https://lore.kernel.org/all/20241216060600.287B4C4CED0@smtp.kernel.org/
  Link: https://lkml.kernel.org/r/20241216190113.1226145-2-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20241214093005.72284-1-yangerkun@huaweicloud.com
Fixes: 9b6713cc7522 ("maple_tree: Add mtree_alloc_cyclic()")
Signed-off-by: Yang Erkun <yangerkun@huawei.com>
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-12-30 17:59:07 -08:00
Linus Torvalds
4c538044ee vsprintf: don't make the 'binary' version pack small integer arguments
The strange vbin_printf / bstr_printf interface used to save one- and
two-byte printf numerical arguments into their packed format.

That's more than a bit strange since the argument buffer is supposed to
be an array of 'u32' words, and it's also very different from how the
source of the data (varargs) work - which always do the normal integer
type conversions, so 'char' and 'short' are always passed as int-sized
anyway.
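
For reference, this is plain C default argument promotion at work: the
consumer of a '%hd' or '%hhd' argument still has to pull a full int out of
the va_list, as in this stand-alone illustration:

```
#include <stdarg.h>

static int first_small_arg(const char *fmt, ...)
{
	va_list args;
	int val;

	va_start(args, fmt);
	val = va_arg(args, int);	/* a short/char vararg arrives as int */
	va_end(args);
	return val;
}
```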

This odd packing causes extra code complexity, and it really isn't worth
it, since the space savings are simply not there: it only happens for
formats like '%hd' (short) and '%hhd' (char), which are very rare
indeed.

In fact, the only other user of this interface seems to be the bpf
helper code (bpf_bprintf_prepare()), and Alexei points out that that
case doesn't support those truncated integer formatting options at all
in the first place.

As a result, bpf_bprintf_prepare() doesn't need any changes for this,
and TRACE_BPRINT uses 'vbin_printf()' -> 'bstr_printf()' for the
formatting and hopefully doesn't expose the odd packing any other way
(knock wood).

Link: https://lore.kernel.org/all/CAADnVQJy65oOubjxM-378O3wDfhuwg8TGa9hc-cTv6NmmUSykQ@mail.gmail.com/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:52:34 -08:00
Linus Torvalds
8d4826cc8a vsnprintf: collapse the number format state into one single state
We'll squirrel away the size of the number in 'struct fmt' instead.

We have two fairly separate state structures: the 'decode state' is in
'struct fmt', while the 'printout format' is in 'printf_spec'.  Both
structures are small enough to pass around in registers even across
function boundaries (ie two words), even on 32-bit machines.

The goal here is to avoid the case statements on the format states,
which generate either deep conditionals or jump tables, while also
keeping the state size manageable.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:36 -08:00
Linus Torvalds
2b76e39fca vsnprintf: mark the indirect width and precision cases unlikely
Make the format_decode() code generation easier to look at by getting
the strange and unlikely cases out of line.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
f372b2256a vsnprintf: inline skip_atoi() again
At some point skip_atoi() had been marked 'noinline_for_stack', but it
turns out that this is now a pessimization, and not inlining it actually
results in a stack frame in format decoding due to the call and thus
hurts stack usage rather than helping.

With the simplistic atoi function inlined, the format decoding now needs
no frame at all.
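
The helper in question is roughly this (shown for context; the in-tree
version may differ slightly):

```
static inline int skip_atoi(const char **s)
{
	int i = 0;

	while (isdigit(**s))
		i = i * 10 + *((*s)++) - '0';
	return i;
}
```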

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
614d13462d vsprintf: deal with format specifiers with a lookup table
We did the flags as an array earlier; they had simpler rules.  The final
format specifiers are a bit more complex since they have more fields to
deal with, and we want to handle the length modifiers at the same time.
But like the flags, we're better off just making it a data-driven table
rather than some case statement.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
312f48b2e2 vsprintf: deal with format flags with a simple lookup table
Rather than a case statement, just look up the printf format flags
(justification, zero-padding etc) using a small table.
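
A sketch of the data-driven approach; the flag bits (ZEROPAD, LEFT, PLUS,
SPACE, SPECIAL) are the existing vsprintf.c ones, while the table and
helper names are made up for illustration:

```
static const unsigned char printf_flag_table[256] = {
	['0'] = ZEROPAD, ['-'] = LEFT,    ['+'] = PLUS,
	[' '] = SPACE,   ['#'] = SPECIAL,
};

static const char *collect_flags(const char *fmt, struct printf_spec *spec)
{
	unsigned char flag;

	/* Accumulate flag bits until a non-flag character is seen. */
	while ((flag = printf_flag_table[(unsigned char)*fmt]) != 0) {
		spec->flags |= flag;
		fmt++;
	}
	return fmt;
}
```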

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
938df695e9 vsprintf: associate the format state with the format pointer
The vsnprintf() code is written as a state machine as it walks the
format pointer, but for various historical reasons the state is oddly
named and was encoded as the 'type' field in the 'struct printf_spec'.

That naming came from the fact that the states used to not just encode
the state of the state machine, but also the various integer types that
would then be printed out.

Let's make the state machine more obvious, and actually call it 'state',
and associate it with the format pointer itself, rather than the
'printf_spec' that contains the currently decoded formatting specs.

This also removes the bit packing from printf_spec, which makes it much
easier on the compiler.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
9e0e6d8a32 vsprintf: fix calling convention for format_decode()
Every single caller wants to know what the next format location is, but
instead the function returned the length of the processed part and so
every single return statement in the format_decode() function was
instead subtracting the start of the format string.

The callers that that did want to know the length (in addition to the
end of the format processing) already had to save off the start of the
format string anyway.  So this was all just doing extra processing both
on the caller and callee sides.

Just change the calling convention to return the end of the format
processing, making everything simpler (and preparing for yet more
simplification to come).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
03d23941bf vsprintf: avoid nested switch statement on same variable
Now that we have simplified the number format types, the top-level
switch table can easily just handle all the remaining cases, and we
don't need to have a case statement with a conditional on the same
expression as the switch statement.

We do want to fall through to the common 'number()' case, but that's
trivially done by making the other case statements use 'continue'
instead of 'break'.  They are just looping back to the top, after all.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Linus Torvalds
be503db4d0 vsprintf: simplify number handling
Instead of dealing with all the different special types (size_t,
unsigned char, ptrdiff_t..) just deal with the size of the integer type
and the sign.

This avoids a lot of unnecessary case statements, and the games we play
with the value of the 'SIGN' flags value

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-23 11:18:35 -08:00
Suren Baghdasaryan
e269b5d291 alloc_tag: fix module allocation tags populated area calculation
vm_module_tags_populate()'s calculation of the populated area assumes that
the area starts at a page boundary and therefore, when new pages are
allocated, the end of the area is page-aligned as well.  If the start of the
area is not page-aligned, then allocating a page and incrementing the end of
the area by PAGE_SIZE leaves a region at the end, still within the area
boundary, which is not populated.  Accessing this area will lead to a kernel
panic.  Fix the calculation by down-aligning the start of the area and using
that as the location that allocated pages are mapped to.
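
Illustratively, the populated region is now tracked from the page boundary
containing the start of the area, e.g. (sketch only):

```
/*
 * Map pages from the containing page boundary, so the populated end,
 * advanced in PAGE_SIZE steps, always stays within mapped memory even
 * when vm_module_tags->addr itself is not page-aligned.
 */
unsigned long start = ALIGN_DOWN((unsigned long)vm_module_tags->addr, PAGE_SIZE);
```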

[gehao@kylinos.cn: fix vm_module_tags_populate's KASAN poisoning logic]
  Link: https://lkml.kernel.org/r/20241205170528.81000-1-hao.ge@linux.dev
[gehao@kylinos.cn: fix panic when CONFIG_KASAN enabled and CONFIG_KASAN_VMALLOC not enabled]
  Link: https://lkml.kernel.org/r/20241212072126.134572-1-hao.ge@linux.dev
Link: https://lkml.kernel.org/r/20241130001423.1114965-1-surenb@google.com
Fixes: 0f9b685626da ("alloc_tag: populate memory for module tags as needed")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202411132111.6a221562-lkp@intel.com
Acked-by: Yu Zhao <yuzhao@google.com>
Tested-by: Adrian Huang <ahuang12@lenovo.com> 
Cc: David Wang <00107082@163.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Sourav Panda <souravpanda@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-12-18 19:04:46 -08:00
David Wang
640a603943 mm/codetag: clear tags before swap
When CONFIG_MEM_ALLOC_PROFILING_DEBUG is set, kernel WARN would be
triggered when calling __alloc_tag_ref_set() during swap:

	alloc_tag was not cleared (got tag for mm/filemap.c:1951)
	WARNING: CPU: 0 PID: 816 at ./include/linux/alloc_tag.h...

Clearing the code tags before the swap fixes the warning.  This patch also
fixes a potential invalid address dereference in alloc_tag_add_check() when
CONFIG_MEM_ALLOC_PROFILING_DEBUG is set and ref->ct is CODETAG_EMPTY,
which is defined as ((void *)1).

Link: https://lkml.kernel.org/r/20241213013332.89910-1-00107082@163.com
Fixes: 51f43d5d82ed ("mm/codetag: swap tags when migrate pages")
Signed-off-by: David Wang <00107082@163.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202412112227.df61ebb-lkp@intel.com
Acked-by: Suren Baghdasaryan <surenb@google.com>
Cc: Kent Overstreet <kent.overstreet@linux.dev>
Cc: Yu Zhao <yuzhao@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2024-12-18 19:04:46 -08:00
Linus Torvalds
7cb1b46631 - Remove if_not_guard() as it is generating incorrect code
- Fix the initialization of the fake lockdep_map for the first locked
   ww_mutex
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEzv7L6UO9uDPlPSfHEsHwGGHeVUoFAmdWw3gACgkQEsHwGGHe
 VUp7sQ//VM2HI27bo5iODm7bu7IhiFfAQkAcRcivBbyj0oUW57etF5l+dBLeC+kr
 sTTHg2iBqMaMcM1tzGEdJqfNs7dK+MWhn5STA2LXsVTBq72tbhAtLeX5oONS/V8h
 BeAPARB5pl5L9rQwy+FZ0Q9/XuFNhbMQhX4JxZn+FB3cg3PImC8Hjm0aKlNznB9e
 JRPhjLohoAoQ0Ty5zXJQWhShh6WLkAmec4OaBzQ+W4wGMLoNd80HgM/3ufTxDTbV
 I1+snOrOq9OR/00OUkuIFQyB50r/4/B5wNtTtlI2UsIjf1YuB03BeIApykc7ARB4
 zdZGFvliNdPJjSyY/ein7gqsI+JirpPd5oSaICsAJ5nNGgL+lxfSfw2cv+S3jWz7
 AJxiFweSQS4fVH/6FxpQ+5e0louqf5f0FgQy17X1vL0imnaZoUKDaHGJd+VGR4hE
 Rpee3/rqh5dFEODxMR87GjEcU+j/LZ/fWzAi/ciZ168YOA8LXeSC0ROvfsy3KhhD
 Eall0M96yqnhEDBZ3KacHguldLQpYhsMUxz8wVmICqPoYYZ31OvqhwmF1K13a1eL
 iKoPoFc2A4bdpBd144myZt8NkKpOAI4CPp74wDN1baj0HGKGuif477M5tiCWay3D
 bQ6FV1dhSLPjd/0GmdfZGJlIS89keS43cNilnCJYOMDalUcybw4=
 =SGkT
 -----END PGP SIGNATURE-----

Merge tag 'locking_urgent_for_v6.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking fixes from Borislav Petkov:

 - Remove if_not_guard() as it is generating incorrect code

 - Fix the initialization of the fake lockdep_map for the first locked
   ww_mutex

* tag 'locking_urgent_for_v6.13_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  headers/cleanup.h: Remove the if_not_guard() facility
  locking/ww_mutex: Fix ww_mutex dummy lockdep map selftest warnings
2024-12-09 10:34:41 -08:00