linux-stable/mm/kasan
Linus Torvalds e06635e26c slab updates for 6.13

Merge tag 'slab-for-6.13-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab

Pull slab updates from Vlastimil Babka:

 - Add a new slab_strict_numa boot parameter to enforce per-object
   memory policies on top of slab folio policies, for systems where
   saving the cost of remote accesses is more important than minimizing
   slab allocation overhead (Christoph Lameter); see the command-line
   sketch after this list

 - Fix for freeptr_offset alignment check being too strict for m68k
   (Geert Uytterhoeven)

 - krealloc() fixes so that __GFP_ZERO guarantees are not violated when
   slub_debug (redzone and object tracking) is enabled (Feng Tang); see
   the C sketch after this list

 - Fix a memory leak when sysfs registration fails for a slab cache,
   and no longer fail to create the cache in that case (Hyeonggon Yoo)

 - Fix handling of consistency problems detected with slub_debug
   enabled (due to a buggy slab user), so that the handling itself does
   not cause further list corruption bugs (yuan.gao)

 - Code cleanup and kerneldocs polishing (Zhen Lei, Vlastimil Babka)
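
A minimal sketch of turning the new behavior on, assuming slab_strict_numa is
passed as a bare flag on the kernel command line (the GRUB file and the
surrounding options are only illustrative, not part of this series):

    # /etc/default/grub (hypothetical excerpt)
    GRUB_CMDLINE_LINUX="... slab_strict_numa"
    # regenerate the bootloader config (e.g. grub-mkconfig) and reboot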
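
A minimal C sketch of the __GFP_ZERO guarantee that the krealloc() fixes above
protect, written against the in-kernel <linux/slab.h> API; the helper below is
hypothetical and only illustrates the expected behavior, it is not code from
the series:

    #include <linux/types.h>
    #include <linux/bug.h>
    #include <linux/slab.h>

    /* Hypothetical helper: grow a zeroed buffer and check that the newly
     * added bytes read back as zero, which must hold even with slub_debug
     * redzoning and original-size tracking enabled. */
    static void grow_zeroed_buffer(void)
    {
            u8 *tmp, *buf = kmalloc(32, GFP_KERNEL | __GFP_ZERO);

            if (!buf)
                    return;

            tmp = krealloc(buf, 64, GFP_KERNEL | __GFP_ZERO);
            if (!tmp) {
                    kfree(buf);
                    return;
            }
            buf = tmp;

            /* Bytes 32..63 were added by krealloc() and must be zero. */
            WARN_ON(buf[63] != 0);
            kfree(buf);
    }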

* tag 'slab-for-6.13-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  slab: Fix too strict alignment check in create_cache()
  mm/slab: Allow cache creation to proceed even if sysfs registration fails
  mm/slub: Avoid list corruption when removing a slab from the full list
  mm/slub, kunit: Add testcase for krealloc redzone and zeroing
  mm/slub: Improve redzone check and zeroing for krealloc()
  mm/slub: Consider kfence case for get_orig_size()
  SLUB: Add support for per object memory policies
  mm, slab: add kerneldocs for common SLAB_ flags
  mm/slab: remove duplicate check in create_cache()
  mm/slub: Move krealloc() and related code to slub.c
  mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on
2024-11-25 16:51:24 -08:00
common.c slub: Introduce CONFIG_SLUB_RCU_DEBUG 2024-08-27 14:12:51 +02:00
generic.c mm/kasan: Don't store metadata inside kmalloc object when slub_debug_orig_size is on 2024-10-29 10:43:23 +01:00
hw_tags.c kasan: use EXPORT_SYMBOL_IF_KUNIT to export symbols 2024-11-11 13:09:43 -08:00
init.c mm: define general function pXd_init() 2024-11-11 17:22:27 -08:00
kasan_test_c.c kasan: add kunit tests for kmalloc_track_caller, kmalloc_node_track_caller 2024-11-11 17:22:25 -08:00
kasan_test_rust.rs kasan: rust: Add KASAN smoke test via UAF 2024-09-16 18:04:37 +02:00
kasan.h kasan: delete CONFIG_KASAN_MODULE_TEST 2024-11-11 00:26:44 -08:00
Makefile kasan: migrate copy_user_test to kunit 2024-11-11 00:26:44 -08:00
quarantine.c kasan: revert eviction of stack traces in generic mode 2024-02-23 17:27:12 -08:00
report_generic.c kasan: stop leaking stack trace handles 2024-01-05 10:17:45 -08:00
report_hw_tags.c kasan: use internal prototypes matching gcc-13 builtins 2023-06-09 16:25:19 -07:00
report_sw_tags.c kasan: use internal prototypes matching gcc-13 builtins 2023-06-09 16:25:19 -07:00
report_tags.c kasan: simplify kasan_complete_mode_report_info for tag-based modes 2023-12-29 11:58:47 -08:00
report.c kasan: use EXPORT_SYMBOL_IF_KUNIT to export symbols 2024-11-11 13:09:43 -08:00
shadow.c mm/vmalloc: combine all TLB flush operations of KASAN shadow virtual address into one operation 2024-11-05 16:56:21 -08:00
sw_tags.c kasan: use internal prototypes matching gcc-13 builtins 2023-06-09 16:25:19 -07:00
tags.c kasan: simplify saving extra info into tracks 2023-12-29 11:58:46 -08:00