mm, treewide: rename MAX_ORDER to MAX_PAGE_ORDER
commit 23baf831a3
("mm, treewide: redefine MAX_ORDER sanely") has
changed the definition of MAX_ORDER to be inclusive. This has caused
issues with code that was not yet upstream and depended on the previous
definition.
To draw attention to the altered meaning of the define, rename MAX_ORDER
to MAX_PAGE_ORDER.
Link: https://lkml.kernel.org/r/20231228144704.14033-2-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
commit 5e0a760b44 (parent fd37721803)
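For orientation, the practical effect of the inclusive definition is that MAX_PAGE_ORDER itself is a valid allocation order and per-order arrays need MAX_PAGE_ORDER + 1 (i.e. NR_PAGE_ORDERS) slots. A minimal sketch of the intended usage, not taken from the patch itself (MAX_PAGE_ORDER, NR_PAGE_ORDERS, PAGE_SIZE and struct zone are real kernel symbols; the function is illustrative):

/* Sketch: how callers are expected to use the renamed limit. */
#include <linux/mm.h>
#include <linux/mmzone.h>

static void buddy_order_walk_example(struct zone *zone)
{
	unsigned int order;
	unsigned long max_bytes = PAGE_SIZE << MAX_PAGE_ORDER;	/* largest buddy block */

	/* NR_PAGE_ORDERS == MAX_PAGE_ORDER + 1: orders 0 .. MAX_PAGE_ORDER inclusive */
	for (order = 0; order < NR_PAGE_ORDERS; order++)
		pr_debug("order %u free blocks: %lu\n", order,
			 zone->free_area[order].nr_free);

	(void)max_bytes;
}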
@@ -193,7 +193,7 @@ from this.

 Free areas descriptor. User-space tools use this value to iterate the
-free_area ranges. MAX_ORDER is used by the zone buddy allocator.
+free_area ranges. NR_PAGE_ORDERS is used by the zone buddy allocator.

 prb
 ---
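As a rough illustration of how a dump-analysis tool consumes this, it reads NR_PAGE_ORDERS from VMCOREINFO and uses it as the loop bound when walking each zone's free_area[]; the sketch below is schematic and the helper names are invented, not the actual makedumpfile/crash API:

/* Hypothetical sketch: walk free_area[] in a crash dump using the exported count. */
unsigned long nr_orders = vmcoreinfo_read_number("NR_PAGE_ORDERS");	/* invented helper */
unsigned long order;

for (order = 0; order < nr_orders; order++)
	count_free_blocks_in_dump(zone_addr, order);			/* invented helper */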
@@ -970,17 +970,17 @@
 buddy allocator. Bigger value increase the probability
 of catching random memory corruption, but reduce the
 amount of memory for normal system use. The maximum
-possible value is MAX_ORDER/2. Setting this parameter
-to 1 or 2 should be enough to identify most random
-memory corruption problems caused by bugs in kernel or
-driver code when a CPU writes to (or reads from) a
-random memory location. Note that there exists a class
-of memory corruptions problems caused by buggy H/W or
-F/W or by drivers badly programming DMA (basically when
-memory is written at bus level and the CPU MMU is
-bypassed) which are not detectable by
-CONFIG_DEBUG_PAGEALLOC, hence this option will not help
-tracking down these problems.
+possible value is MAX_PAGE_ORDER/2. Setting this
+parameter to 1 or 2 should be enough to identify most
+random memory corruption problems caused by bugs in
+kernel or driver code when a CPU writes to (or reads
+from) a random memory location. Note that there exists
+a class of memory corruptions problems caused by buggy
+H/W or F/W or by drivers badly programming DMA
+(basically when memory is written at bus level and the
+CPU MMU is bypassed) which are not detectable by
+CONFIG_DEBUG_PAGEALLOC, hence this option will not
+help tracking down these problems.

 debug_pagealloc=
 [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter
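By way of example, with the default MAX_PAGE_ORDER of 10 the largest value this parameter accepts is 10/2 = 5, and the setting the text recommends would look like this on the kernel command line (illustrative):

	debug_guardpage_minorder=2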
@@ -4136,7 +4136,7 @@
 [KNL] Minimal page reporting order
 Format: <integer>
 Adjust the minimal page reporting order. The page
-reporting is disabled when it exceeds MAX_ORDER.
+reporting is disabled when it exceeds MAX_PAGE_ORDER.

 panic=		[KNL] Kernel behaviour on panic: delay <timeout>
 		timeout > 0: seconds before rebooting
@@ -263,20 +263,20 @@ the name indicates, this function allocates pages of memory, and the second
 argument is "order" or a power of two number of pages, that is
 (for PAGE_SIZE == 4096) order=0 ==> 4096 bytes, order=1 ==> 8192 bytes,
 order=2 ==> 16384 bytes, etc. The maximum size of a
-region allocated by __get_free_pages is determined by the MAX_ORDER macro. More
-precisely the limit can be calculated as::
+region allocated by __get_free_pages is determined by the MAX_PAGE_ORDER macro.
+More precisely the limit can be calculated as::

-   PAGE_SIZE << MAX_ORDER
+   PAGE_SIZE << MAX_PAGE_ORDER

 In a i386 architecture PAGE_SIZE is 4096 bytes
-In a 2.4/i386 kernel MAX_ORDER is 10
-In a 2.6/i386 kernel MAX_ORDER is 11
+In a 2.4/i386 kernel MAX_PAGE_ORDER is 10
+In a 2.6/i386 kernel MAX_PAGE_ORDER is 11

 So get_free_pages can allocate as much as 4MB or 8MB in a 2.4/2.6 kernel
 respectively, with an i386 architecture.

 User space programs can include /usr/include/sys/user.h and
-/usr/include/linux/mmzone.h to get PAGE_SIZE MAX_ORDER declarations.
+/usr/include/linux/mmzone.h to get PAGE_SIZE MAX_PAGE_ORDER declarations.

 The pagesize can also be determined dynamically with the getpagesize (2)
 system call.
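Plugging in the current defaults makes the limit concrete: with a 4096-byte PAGE_SIZE and MAX_PAGE_ORDER of 10, the ceiling is 4096 << 10 = 4 MiB. A small sketch of requesting exactly that ceiling (illustrative kernel-side use, error handling trimmed):

/* Sketch: request the largest order the buddy allocator supports. */
unsigned long addr = __get_free_pages(GFP_KERNEL, MAX_PAGE_ORDER);	/* up to 4 MiB on 4K pages */

if (addr)
	free_pages(addr, MAX_PAGE_ORDER);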
@@ -324,7 +324,7 @@ Definitions:
 (see /proc/slabinfo)
 <pointer size> depends on the architecture -- ``sizeof(void *)``
 <page size> depends on the architecture -- PAGE_SIZE or getpagesize (2)
-<max-order> is the value defined with MAX_ORDER
+<max-order> is the value defined with MAX_PAGE_ORDER
 <frame size> it's an upper bound of frame's capture size (more on this later)
 ============== ================================================================

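To see where <max-order> enters the sizing rules, each ring block is handed out by the allocator described above, so a block cannot usefully exceed <pagesize> << <max-order>. A schematic user-space setup under that constraint (values illustrative, no error handling):

/* Sketch: keep tp_block_size within the <pagesize> << <max-order> bound (4 MiB for 4K pages, order 10). */
struct tpacket_req req = {
	.tp_block_size = 1 << 22,		/* 4 MiB block, at the documented bound */
	.tp_frame_size = 2048,
	.tp_block_nr   = 64,
	.tp_frame_nr   = (1 << 22) / 2048 * 64,
};
/* setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)); */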
@@ -1362,7 +1362,7 @@ config ARCH_FORCE_MAX_ORDER
 default "10"
 help
 The kernel page allocator limits the size of maximal physically
-contiguous allocations. The limit is called MAX_ORDER and it
+contiguous allocations. The limit is called MAX_PAGE_ORDER and it
 defines the maximal power of two of number of pages that can be
 allocated as a single contiguous block. This option allows
 overriding the default setting when ability to allocate very

@@ -1520,15 +1520,15 @@ config XEN

 # include/linux/mmzone.h requires the following to be true:
 #
-# MAX_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
+# MAX_PAGE_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
 #
-# so the maximum value of MAX_ORDER is SECTION_SIZE_BITS - PAGE_SHIFT:
+# so the maximum value of MAX_PAGE_ORDER is SECTION_SIZE_BITS - PAGE_SHIFT:
 #
-#     | SECTION_SIZE_BITS | PAGE_SHIFT | max MAX_ORDER | default MAX_ORDER |
-# ----+-------------------+--------------+-----------------+--------------------+
+#     | SECTION_SIZE_BITS | PAGE_SHIFT | max MAX_PAGE_ORDER | default MAX_PAGE_ORDER |
+# ----+-------------------+--------------+----------------------+-------------------------+
 # 4K  | 27 | 12 | 15 | 10 |
 # 16K | 27 | 14 | 13 | 11 |
 # 64K | 29 | 16 | 13 | 13 |
 config ARCH_FORCE_MAX_ORDER
 int
 default "13" if ARM64_64K_PAGES

@@ -1536,16 +1536,16 @@ config ARCH_FORCE_MAX_ORDER
 default "10"
 help
 The kernel page allocator limits the size of maximal physically
-contiguous allocations. The limit is called MAX_ORDER and it
+contiguous allocations. The limit is called MAX_PAGE_ORDER and it
 defines the maximal power of two of number of pages that can be
 allocated as a single contiguous block. This option allows
 overriding the default setting when ability to allocate very
 large blocks of physically contiguous memory is required.

 The maximal size of allocation cannot exceed the size of the
-section, so the value of MAX_ORDER should satisfy
+section, so the value of MAX_PAGE_ORDER should satisfy

-  MAX_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
+  MAX_PAGE_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS

 Don't change if unsure.

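Worked through against the table above: with 4K pages PAGE_SHIFT is 12 and SECTION_SIZE_BITS is 27, so the largest permissible MAX_PAGE_ORDER is 27 - 12 = 15; the 64K row gives 29 - 16 = 13, which is why ARM64_64K_PAGES defaults to 13. The constraint itself can be written as a compile-time check; this is only an illustration of the rule, not the kernel's actual assertion:

/* Sketch: the rule from the comment above, as a build-time check (4K-page numbers). */
#define EXAMPLE_SECTION_SIZE_BITS	27
#define EXAMPLE_PAGE_SHIFT		12
#define EXAMPLE_MAX_PAGE_ORDER		10

_Static_assert(EXAMPLE_MAX_PAGE_ORDER + EXAMPLE_PAGE_SHIFT <= EXAMPLE_SECTION_SIZE_BITS,
	       "a maximal buddy block must fit within one sparsemem section");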
@@ -10,7 +10,7 @@
 /*
  * Section size must be at least 512MB for 64K base
  * page size config. Otherwise it will be less than
- * MAX_ORDER and the build process will fail.
+ * MAX_PAGE_ORDER and the build process will fail.
  */
 #ifdef CONFIG_ARM64_64K_PAGES
 #define SECTION_SIZE_BITS 29
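The numbers behind that comment: with 64K base pages PAGE_SHIFT is 16 and MAX_PAGE_ORDER is 13 (per the Kconfig table earlier in this patch), so a maximal buddy block is 2^(16+13) = 2^29 bytes = 512MB, and SECTION_SIZE_BITS of 29 is the smallest section size that can still hold one such block.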
@ -228,7 +228,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
|
|||||||
int i;
|
int i;
|
||||||
|
|
||||||
hyp_spin_lock_init(&pool->lock);
|
hyp_spin_lock_init(&pool->lock);
|
||||||
pool->max_order = min(MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
|
pool->max_order = min(MAX_PAGE_ORDER,
|
||||||
|
get_order(nr_pages << PAGE_SHIFT));
|
||||||
for (i = 0; i <= pool->max_order; i++)
|
for (i = 0; i <= pool->max_order; i++)
|
||||||
INIT_LIST_HEAD(&pool->free_area[i]);
|
INIT_LIST_HEAD(&pool->free_area[i]);
|
||||||
pool->range_start = phys;
|
pool->range_start = phys;
|
||||||
|
@ -51,7 +51,7 @@ void __init arm64_hugetlb_cma_reserve(void)
|
|||||||
* page allocator. Just warn if there is any change
|
* page allocator. Just warn if there is any change
|
||||||
* breaking this assumption.
|
* breaking this assumption.
|
||||||
*/
|
*/
|
||||||
WARN_ON(order <= MAX_ORDER);
|
WARN_ON(order <= MAX_PAGE_ORDER);
|
||||||
hugetlb_cma_reserve(order);
|
hugetlb_cma_reserve(order);
|
||||||
}
|
}
|
||||||
#endif /* CONFIG_CMA */
|
#endif /* CONFIG_CMA */
|
||||||
|
@ -402,7 +402,7 @@ config ARCH_FORCE_MAX_ORDER
|
|||||||
default "10"
|
default "10"
|
||||||
help
|
help
|
||||||
The kernel page allocator limits the size of maximal physically
|
The kernel page allocator limits the size of maximal physically
|
||||||
contiguous allocations. The limit is called MAX_ORDER and it
|
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
|
||||||
defines the maximal power of two of number of pages that can be
|
defines the maximal power of two of number of pages that can be
|
||||||
allocated as a single contiguous block. This option allows
|
allocated as a single contiguous block. This option allows
|
||||||
overriding the default setting when ability to allocate very
|
overriding the default setting when ability to allocate very
|
||||||
|
@ -50,7 +50,7 @@ config ARCH_FORCE_MAX_ORDER
|
|||||||
default "10"
|
default "10"
|
||||||
help
|
help
|
||||||
The kernel page allocator limits the size of maximal physically
|
The kernel page allocator limits the size of maximal physically
|
||||||
contiguous allocations. The limit is called MAX_ORDER and it
|
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
|
||||||
defines the maximal power of two of number of pages that can be
|
defines the maximal power of two of number of pages that can be
|
||||||
allocated as a single contiguous block. This option allows
|
allocated as a single contiguous block. This option allows
|
||||||
overriding the default setting when ability to allocate very
|
overriding the default setting when ability to allocate very
|
||||||
|
@ -915,7 +915,7 @@ config ARCH_FORCE_MAX_ORDER
|
|||||||
default "10"
|
default "10"
|
||||||
help
|
help
|
||||||
The kernel page allocator limits the size of maximal physically
|
The kernel page allocator limits the size of maximal physically
|
||||||
contiguous allocations. The limit is called MAX_ORDER and it
|
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
|
||||||
defines the maximal power of two of number of pages that can be
|
defines the maximal power of two of number of pages that can be
|
||||||
allocated as a single contiguous block. This option allows
|
allocated as a single contiguous block. This option allows
|
||||||
overriding the default setting when ability to allocate very
|
overriding the default setting when ability to allocate very
|
||||||
|
@ -97,7 +97,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
|
|||||||
}
|
}
|
||||||
|
|
||||||
mmap_read_lock(mm);
|
mmap_read_lock(mm);
|
||||||
chunk = (1UL << (PAGE_SHIFT + MAX_ORDER)) /
|
chunk = (1UL << (PAGE_SHIFT + MAX_PAGE_ORDER)) /
|
||||||
sizeof(struct vm_area_struct *);
|
sizeof(struct vm_area_struct *);
|
||||||
chunk = min(chunk, entries);
|
chunk = min(chunk, entries);
|
||||||
for (entry = 0; entry < entries; entry += chunk) {
|
for (entry = 0; entry < entries; entry += chunk) {
|
||||||
|
@ -615,7 +615,7 @@ void __init gigantic_hugetlb_cma_reserve(void)
|
|||||||
order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
|
order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
|
||||||
|
|
||||||
if (order) {
|
if (order) {
|
||||||
VM_WARN_ON(order <= MAX_ORDER);
|
VM_WARN_ON(order <= MAX_PAGE_ORDER);
|
||||||
hugetlb_cma_reserve(order);
|
hugetlb_cma_reserve(order);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
@ -1389,7 +1389,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
|
|||||||
* DMA window can be larger than available memory, which will
|
* DMA window can be larger than available memory, which will
|
||||||
* cause errors later.
|
* cause errors later.
|
||||||
*/
|
*/
|
||||||
const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER);
|
const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_PAGE_ORDER);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* We create the default window as big as we can. The constraint is
|
* We create the default window as big as we can. The constraint is
|
||||||
|
@@ -26,7 +26,7 @@ config ARCH_FORCE_MAX_ORDER
 default "10"
 help
 The kernel page allocator limits the size of maximal physically
-contiguous allocations. The limit is called MAX_ORDER and it
+contiguous allocations. The limit is called MAX_PAGE_ORDER and it
 defines the maximal power of two of number of pages that can be
 allocated as a single contiguous block. This option allows
 overriding the default setting when ability to allocate very
|
@ -277,7 +277,7 @@ config ARCH_FORCE_MAX_ORDER
|
|||||||
default "12"
|
default "12"
|
||||||
help
|
help
|
||||||
The kernel page allocator limits the size of maximal physically
|
The kernel page allocator limits the size of maximal physically
|
||||||
contiguous allocations. The limit is called MAX_ORDER and it
|
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
|
||||||
defines the maximal power of two of number of pages that can be
|
defines the maximal power of two of number of pages that can be
|
||||||
allocated as a single contiguous block. This option allows
|
allocated as a single contiguous block. This option allows
|
||||||
overriding the default setting when ability to allocate very
|
overriding the default setting when ability to allocate very
|
||||||
|
@ -194,7 +194,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size,
|
|||||||
|
|
||||||
size = IO_PAGE_ALIGN(size);
|
size = IO_PAGE_ALIGN(size);
|
||||||
order = get_order(size);
|
order = get_order(size);
|
||||||
if (unlikely(order > MAX_ORDER))
|
if (unlikely(order > MAX_PAGE_ORDER))
|
||||||
return NULL;
|
return NULL;
|
||||||
|
|
||||||
npages = size >> IO_PAGE_SHIFT;
|
npages = size >> IO_PAGE_SHIFT;
|
||||||
|
@ -402,8 +402,8 @@ void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss)
|
|||||||
unsigned long new_rss_limit;
|
unsigned long new_rss_limit;
|
||||||
gfp_t gfp_flags;
|
gfp_t gfp_flags;
|
||||||
|
|
||||||
if (max_tsb_size > PAGE_SIZE << MAX_ORDER)
|
if (max_tsb_size > PAGE_SIZE << MAX_PAGE_ORDER)
|
||||||
max_tsb_size = PAGE_SIZE << MAX_ORDER;
|
max_tsb_size = PAGE_SIZE << MAX_PAGE_ORDER;
|
||||||
|
|
||||||
new_cache_index = 0;
|
new_cache_index = 0;
|
||||||
for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
|
for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
|
||||||
|
@ -373,10 +373,10 @@ int __init linux_main(int argc, char **argv)
|
|||||||
max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC;
|
max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Zones have to begin on a 1 << MAX_ORDER page boundary,
|
* Zones have to begin on a 1 << MAX_PAGE_ORDER page boundary,
|
||||||
* so this makes sure that's true for highmem
|
* so this makes sure that's true for highmem
|
||||||
*/
|
*/
|
||||||
max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER)) - 1);
|
max_physmem &= ~((1 << (PAGE_SHIFT + MAX_PAGE_ORDER)) - 1);
|
||||||
if (physmem_size + iomem_size > max_physmem) {
|
if (physmem_size + iomem_size > max_physmem) {
|
||||||
highmem = physmem_size + iomem_size - max_physmem;
|
highmem = physmem_size + iomem_size - max_physmem;
|
||||||
physmem_size -= highmem;
|
physmem_size -= highmem;
|
||||||
|
@ -793,7 +793,7 @@ config ARCH_FORCE_MAX_ORDER
|
|||||||
default "10"
|
default "10"
|
||||||
help
|
help
|
||||||
The kernel page allocator limits the size of maximal physically
|
The kernel page allocator limits the size of maximal physically
|
||||||
contiguous allocations. The limit is called MAX_ORDER and it
|
contiguous allocations. The limit is called MAX_PAGE_ORDER and it
|
||||||
defines the maximal power of two of number of pages that can be
|
defines the maximal power of two of number of pages that can be
|
||||||
allocated as a single contiguous block. This option allows
|
allocated as a single contiguous block. This option allows
|
||||||
overriding the default setting when ability to allocate very
|
overriding the default setting when ability to allocate very
|
||||||
|
@ -451,7 +451,7 @@ static int create_sgt(struct qaic_device *qdev, struct sg_table **sgt_out, u64 s
|
|||||||
* later
|
* later
|
||||||
*/
|
*/
|
||||||
buf_extra = (PAGE_SIZE - size % PAGE_SIZE) % PAGE_SIZE;
|
buf_extra = (PAGE_SIZE - size % PAGE_SIZE) % PAGE_SIZE;
|
||||||
max_order = min(MAX_ORDER - 1, get_order(size));
|
max_order = min(MAX_PAGE_ORDER - 1, get_order(size));
|
||||||
} else {
|
} else {
|
||||||
/* allocate a single page for book keeping */
|
/* allocate a single page for book keeping */
|
||||||
nr_pages = 1;
|
nr_pages = 1;
|
||||||
|
@ -226,8 +226,8 @@ static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from,
|
|||||||
if (*ppos < 0 || !count)
|
if (*ppos < 0 || !count)
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
if (count > (PAGE_SIZE << MAX_ORDER))
|
if (count > (PAGE_SIZE << MAX_PAGE_ORDER))
|
||||||
count = PAGE_SIZE << MAX_ORDER;
|
count = PAGE_SIZE << MAX_PAGE_ORDER;
|
||||||
|
|
||||||
buf = kmalloc(count, GFP_KERNEL);
|
buf = kmalloc(count, GFP_KERNEL);
|
||||||
if (!buf)
|
if (!buf)
|
||||||
@ -373,8 +373,8 @@ static ssize_t regmap_reg_ranges_read_file(struct file *file,
|
|||||||
if (*ppos < 0 || !count)
|
if (*ppos < 0 || !count)
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
if (count > (PAGE_SIZE << MAX_ORDER))
|
if (count > (PAGE_SIZE << MAX_PAGE_ORDER))
|
||||||
count = PAGE_SIZE << MAX_ORDER;
|
count = PAGE_SIZE << MAX_PAGE_ORDER;
|
||||||
|
|
||||||
buf = kmalloc(count, GFP_KERNEL);
|
buf = kmalloc(count, GFP_KERNEL);
|
||||||
if (!buf)
|
if (!buf)
|
||||||
|
@ -3079,7 +3079,7 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
|
#define MAX_LEN (1UL << MAX_PAGE_ORDER << PAGE_SHIFT)
|
||||||
|
|
||||||
static int raw_cmd_copyin(int cmd, void __user *param,
|
static int raw_cmd_copyin(int cmd, void __user *param,
|
||||||
struct floppy_raw_cmd **rcmd)
|
struct floppy_raw_cmd **rcmd)
|
||||||
|
@ -906,7 +906,7 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
|
|||||||
/*
|
/*
|
||||||
* The length of the ID shouldn't be assumed by software since
|
* The length of the ID shouldn't be assumed by software since
|
||||||
* it may change in the future. The allocation size is limited
|
* it may change in the future. The allocation size is limited
|
||||||
* to 1 << (PAGE_SHIFT + MAX_ORDER) by the page allocator.
|
* to 1 << (PAGE_SHIFT + MAX_PAGE_ORDER) by the page allocator.
|
||||||
* If the allocation fails, simply return ENOMEM rather than
|
* If the allocation fails, simply return ENOMEM rather than
|
||||||
* warning in the kernel log.
|
* warning in the kernel log.
|
||||||
*/
|
*/
|
||||||
|
@ -70,11 +70,11 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
|
|||||||
HISI_ACC_SGL_ALIGN_SIZE);
|
HISI_ACC_SGL_ALIGN_SIZE);
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_ORDER,
|
* the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_PAGE_ORDER,
|
||||||
* block size may exceed 2^31 on ia64, so the max of block size is 2^31
|
* block size may exceed 2^31 on ia64, so the max of block size is 2^31
|
||||||
*/
|
*/
|
||||||
block_size = 1 << (PAGE_SHIFT + MAX_ORDER < 32 ?
|
block_size = 1 << (PAGE_SHIFT + MAX_PAGE_ORDER < 32 ?
|
||||||
PAGE_SHIFT + MAX_ORDER : 31);
|
PAGE_SHIFT + MAX_PAGE_ORDER : 31);
|
||||||
sgl_num_per_block = block_size / sgl_size;
|
sgl_num_per_block = block_size / sgl_size;
|
||||||
block_num = count / sgl_num_per_block;
|
block_num = count / sgl_num_per_block;
|
||||||
remain_sgl = count % sgl_num_per_block;
|
remain_sgl = count % sgl_num_per_block;
|
||||||
|
@ -36,7 +36,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
|
|||||||
struct sg_table *st;
|
struct sg_table *st;
|
||||||
struct scatterlist *sg;
|
struct scatterlist *sg;
|
||||||
unsigned int npages; /* restricted by sg_alloc_table */
|
unsigned int npages; /* restricted by sg_alloc_table */
|
||||||
int max_order = MAX_ORDER;
|
int max_order = MAX_PAGE_ORDER;
|
||||||
unsigned int max_segment;
|
unsigned int max_segment;
|
||||||
gfp_t gfp;
|
gfp_t gfp;
|
||||||
|
|
||||||
|
@ -115,7 +115,7 @@ static int get_huge_pages(struct drm_i915_gem_object *obj)
|
|||||||
do {
|
do {
|
||||||
struct page *page;
|
struct page *page;
|
||||||
|
|
||||||
GEM_BUG_ON(order > MAX_ORDER);
|
GEM_BUG_ON(order > MAX_PAGE_ORDER);
|
||||||
page = alloc_pages(GFP | __GFP_ZERO, order);
|
page = alloc_pages(GFP | __GFP_ZERO, order);
|
||||||
if (!page)
|
if (!page)
|
||||||
goto err;
|
goto err;
|
||||||
|
@ -109,7 +109,7 @@ static const struct ttm_pool_test_case ttm_pool_basic_cases[] = {
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
.description = "Above the allocation limit",
|
.description = "Above the allocation limit",
|
||||||
.order = MAX_ORDER + 1,
|
.order = MAX_PAGE_ORDER + 1,
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
.description = "One page, with coherent DMA mappings enabled",
|
.description = "One page, with coherent DMA mappings enabled",
|
||||||
@ -118,7 +118,7 @@ static const struct ttm_pool_test_case ttm_pool_basic_cases[] = {
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
.description = "Above the allocation limit, with coherent DMA mappings enabled",
|
.description = "Above the allocation limit, with coherent DMA mappings enabled",
|
||||||
.order = MAX_ORDER + 1,
|
.order = MAX_PAGE_ORDER + 1,
|
||||||
.use_dma_alloc = true,
|
.use_dma_alloc = true,
|
||||||
},
|
},
|
||||||
};
|
};
|
||||||
@ -165,7 +165,7 @@ static void ttm_pool_alloc_basic(struct kunit *test)
|
|||||||
fst_page = tt->pages[0];
|
fst_page = tt->pages[0];
|
||||||
last_page = tt->pages[tt->num_pages - 1];
|
last_page = tt->pages[tt->num_pages - 1];
|
||||||
|
|
||||||
if (params->order <= MAX_ORDER) {
|
if (params->order <= MAX_PAGE_ORDER) {
|
||||||
if (params->use_dma_alloc) {
|
if (params->use_dma_alloc) {
|
||||||
KUNIT_ASSERT_NOT_NULL(test, (void *)fst_page->private);
|
KUNIT_ASSERT_NOT_NULL(test, (void *)fst_page->private);
|
||||||
KUNIT_ASSERT_NOT_NULL(test, (void *)last_page->private);
|
KUNIT_ASSERT_NOT_NULL(test, (void *)last_page->private);
|
||||||
@ -182,7 +182,7 @@ static void ttm_pool_alloc_basic(struct kunit *test)
|
|||||||
* order 0 blocks
|
* order 0 blocks
|
||||||
*/
|
*/
|
||||||
KUNIT_ASSERT_EQ(test, fst_page->private,
|
KUNIT_ASSERT_EQ(test, fst_page->private,
|
||||||
min_t(unsigned int, MAX_ORDER,
|
min_t(unsigned int, MAX_PAGE_ORDER,
|
||||||
params->order));
|
params->order));
|
||||||
KUNIT_ASSERT_EQ(test, last_page->private, 0);
|
KUNIT_ASSERT_EQ(test, last_page->private, 0);
|
||||||
}
|
}
|
||||||
|
@ -447,7 +447,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
|
|||||||
else
|
else
|
||||||
gfp_flags |= GFP_HIGHUSER;
|
gfp_flags |= GFP_HIGHUSER;
|
||||||
|
|
||||||
for (order = min_t(unsigned int, MAX_ORDER, __fls(num_pages));
|
for (order = min_t(unsigned int, MAX_PAGE_ORDER, __fls(num_pages));
|
||||||
num_pages;
|
num_pages;
|
||||||
order = min_t(unsigned int, order, __fls(num_pages))) {
|
order = min_t(unsigned int, order, __fls(num_pages))) {
|
||||||
struct ttm_pool_type *pt;
|
struct ttm_pool_type *pt;
|
||||||
|
@ -188,7 +188,7 @@
|
|||||||
#ifdef CONFIG_CMA_ALIGNMENT
|
#ifdef CONFIG_CMA_ALIGNMENT
|
||||||
#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
|
#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
|
||||||
#else
|
#else
|
||||||
#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER)
|
#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_PAGE_ORDER)
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
/*
|
/*
|
||||||
|
@ -884,7 +884,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
|
|||||||
struct page **pages;
|
struct page **pages;
|
||||||
unsigned int i = 0, nid = dev_to_node(dev);
|
unsigned int i = 0, nid = dev_to_node(dev);
|
||||||
|
|
||||||
order_mask &= GENMASK(MAX_ORDER, 0);
|
order_mask &= GENMASK(MAX_PAGE_ORDER, 0);
|
||||||
if (!order_mask)
|
if (!order_mask)
|
||||||
return NULL;
|
return NULL;
|
||||||
|
|
||||||
|
@ -2465,8 +2465,8 @@ static bool its_parse_indirect_baser(struct its_node *its,
|
|||||||
* feature is not supported by hardware.
|
* feature is not supported by hardware.
|
||||||
*/
|
*/
|
||||||
new_order = max_t(u32, get_order(esz << ids), new_order);
|
new_order = max_t(u32, get_order(esz << ids), new_order);
|
||||||
if (new_order > MAX_ORDER) {
|
if (new_order > MAX_PAGE_ORDER) {
|
||||||
new_order = MAX_ORDER;
|
new_order = MAX_PAGE_ORDER;
|
||||||
ids = ilog2(PAGE_ORDER_TO_SIZE(new_order) / (int)esz);
|
ids = ilog2(PAGE_ORDER_TO_SIZE(new_order) / (int)esz);
|
||||||
pr_warn("ITS@%pa: %s Table too large, reduce ids %llu->%u\n",
|
pr_warn("ITS@%pa: %s Table too large, reduce ids %llu->%u\n",
|
||||||
&its->phys_base, its_base_type_string[type],
|
&its->phys_base, its_base_type_string[type],
|
||||||
|
@ -1170,7 +1170,7 @@ static void __cache_size_refresh(void)
|
|||||||
* If the allocation may fail we use __get_free_pages. Memory fragmentation
|
* If the allocation may fail we use __get_free_pages. Memory fragmentation
|
||||||
* won't have a fatal effect here, but it just causes flushes of some other
|
* won't have a fatal effect here, but it just causes flushes of some other
|
||||||
* buffers and more I/O will be performed. Don't use __get_free_pages if it
|
* buffers and more I/O will be performed. Don't use __get_free_pages if it
|
||||||
* always fails (i.e. order > MAX_ORDER).
|
* always fails (i.e. order > MAX_PAGE_ORDER).
|
||||||
*
|
*
|
||||||
* If the allocation shouldn't fail we use __vmalloc. This is only for the
|
* If the allocation shouldn't fail we use __vmalloc. This is only for the
|
||||||
* initial reserve allocation, so there's no risk of wasting all vmalloc
|
* initial reserve allocation, so there's no risk of wasting all vmalloc
|
||||||
|
@ -1673,7 +1673,7 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned int size)
|
|||||||
unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
unsigned int nr_iovecs = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
|
||||||
gfp_t gfp_mask = GFP_NOWAIT | __GFP_HIGHMEM;
|
gfp_t gfp_mask = GFP_NOWAIT | __GFP_HIGHMEM;
|
||||||
unsigned int remaining_size;
|
unsigned int remaining_size;
|
||||||
unsigned int order = MAX_ORDER;
|
unsigned int order = MAX_PAGE_ORDER;
|
||||||
|
|
||||||
retry:
|
retry:
|
||||||
if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))
|
if (unlikely(gfp_mask & __GFP_DIRECT_RECLAIM))
|
||||||
|
@ -434,7 +434,7 @@ static struct bio *clone_bio(struct dm_target *ti, struct flakey_c *fc, struct b
|
|||||||
|
|
||||||
remaining_size = size;
|
remaining_size = size;
|
||||||
|
|
||||||
order = MAX_ORDER;
|
order = MAX_PAGE_ORDER;
|
||||||
while (remaining_size) {
|
while (remaining_size) {
|
||||||
struct page *pages;
|
struct page *pages;
|
||||||
unsigned size_to_add, to_copy;
|
unsigned size_to_add, to_copy;
|
||||||
|
@ -443,7 +443,7 @@ static int genwqe_mmap(struct file *filp, struct vm_area_struct *vma)
|
|||||||
if (vsize == 0)
|
if (vsize == 0)
|
||||||
return -EINVAL;
|
return -EINVAL;
|
||||||
|
|
||||||
if (get_order(vsize) > MAX_ORDER)
|
if (get_order(vsize) > MAX_PAGE_ORDER)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
|
|
||||||
dma_map = kzalloc(sizeof(struct dma_mapping), GFP_KERNEL);
|
dma_map = kzalloc(sizeof(struct dma_mapping), GFP_KERNEL);
|
||||||
|
@ -210,7 +210,7 @@ u32 genwqe_crc32(u8 *buff, size_t len, u32 init)
|
|||||||
void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size,
|
void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size,
|
||||||
dma_addr_t *dma_handle)
|
dma_addr_t *dma_handle)
|
||||||
{
|
{
|
||||||
if (get_order(size) > MAX_ORDER)
|
if (get_order(size) > MAX_PAGE_ORDER)
|
||||||
return NULL;
|
return NULL;
|
||||||
|
|
||||||
return dma_alloc_coherent(&cd->pci_dev->dev, size, dma_handle,
|
return dma_alloc_coherent(&cd->pci_dev->dev, size, dma_handle,
|
||||||
@ -308,7 +308,7 @@ int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
|
|||||||
sgl->write = write;
|
sgl->write = write;
|
||||||
sgl->sgl_size = genwqe_sgl_size(sgl->nr_pages);
|
sgl->sgl_size = genwqe_sgl_size(sgl->nr_pages);
|
||||||
|
|
||||||
if (get_order(sgl->sgl_size) > MAX_ORDER) {
|
if (get_order(sgl->sgl_size) > MAX_PAGE_ORDER) {
|
||||||
dev_err(&pci_dev->dev,
|
dev_err(&pci_dev->dev,
|
||||||
"[%s] err: too much memory requested!\n", __func__);
|
"[%s] err: too much memory requested!\n", __func__);
|
||||||
return ret;
|
return ret;
|
||||||
|
@ -1041,7 +1041,7 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
|
|||||||
return;
|
return;
|
||||||
|
|
||||||
order = get_order(alloc_size);
|
order = get_order(alloc_size);
|
||||||
if (order > MAX_ORDER) {
|
if (order > MAX_PAGE_ORDER) {
|
||||||
if (net_ratelimit())
|
if (net_ratelimit())
|
||||||
dev_warn(ring_to_dev(ring), "failed to allocate tx spare buffer, exceed to max order\n");
|
dev_warn(ring_to_dev(ring), "failed to allocate tx spare buffer, exceed to max order\n");
|
||||||
return;
|
return;
|
||||||
|
@ -48,7 +48,7 @@
|
|||||||
* of 4096 jumbo frames (MTU=9000) we will need about 9K*4K = 36MB plus
|
* of 4096 jumbo frames (MTU=9000) we will need about 9K*4K = 36MB plus
|
||||||
* some padding.
|
* some padding.
|
||||||
*
|
*
|
||||||
* But the size of a single DMA region is limited by MAX_ORDER in the
|
* But the size of a single DMA region is limited by MAX_PAGE_ORDER in the
|
||||||
* kernel (about 16MB currently). To support say 4K Jumbo frames, we
|
* kernel (about 16MB currently). To support say 4K Jumbo frames, we
|
||||||
* use a set of LTBs (struct ltb_set) per pool.
|
* use a set of LTBs (struct ltb_set) per pool.
|
||||||
*
|
*
|
||||||
@ -75,7 +75,7 @@
|
|||||||
* pool for the 4MB. Thus the 16 Rx and Tx queues require 32 * 5 = 160
|
* pool for the 4MB. Thus the 16 Rx and Tx queues require 32 * 5 = 160
|
||||||
* plus 16 for the TSO pools for a total of 176 LTB mappings per VNIC.
|
* plus 16 for the TSO pools for a total of 176 LTB mappings per VNIC.
|
||||||
*/
|
*/
|
||||||
#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << MAX_ORDER) * PAGE_SIZE))
|
#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << MAX_PAGE_ORDER) * PAGE_SIZE))
|
||||||
#define IBMVNIC_ONE_LTB_SIZE min((u32)(8 << 20), IBMVNIC_ONE_LTB_MAX)
|
#define IBMVNIC_ONE_LTB_SIZE min((u32)(8 << 20), IBMVNIC_ONE_LTB_MAX)
|
||||||
#define IBMVNIC_LTB_SET_SIZE (38 << 20)
|
#define IBMVNIC_LTB_SET_SIZE (38 << 20)
|
||||||
|
|
||||||
|
@ -927,8 +927,8 @@ static phys_addr_t hvfb_get_phymem(struct hv_device *hdev,
|
|||||||
if (request_size == 0)
|
if (request_size == 0)
|
||||||
return -1;
|
return -1;
|
||||||
|
|
||||||
if (order <= MAX_ORDER) {
|
if (order <= MAX_PAGE_ORDER) {
|
||||||
/* Call alloc_pages if the size is less than 2^MAX_ORDER */
|
/* Call alloc_pages if the size is less than 2^MAX_PAGE_ORDER */
|
||||||
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
|
page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
|
||||||
if (!page)
|
if (!page)
|
||||||
return -1;
|
return -1;
|
||||||
@ -958,7 +958,7 @@ static void hvfb_release_phymem(struct hv_device *hdev,
|
|||||||
{
|
{
|
||||||
unsigned int order = get_order(size);
|
unsigned int order = get_order(size);
|
||||||
|
|
||||||
if (order <= MAX_ORDER)
|
if (order <= MAX_PAGE_ORDER)
|
||||||
__free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order);
|
__free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order);
|
||||||
else
|
else
|
||||||
dma_free_coherent(&hdev->device,
|
dma_free_coherent(&hdev->device,
|
||||||
|
@ -197,7 +197,7 @@ static int vmlfb_alloc_vram(struct vml_info *vinfo,
|
|||||||
va = &vinfo->vram[i];
|
va = &vinfo->vram[i];
|
||||||
order = 0;
|
order = 0;
|
||||||
|
|
||||||
while (requested > (PAGE_SIZE << order) && order <= MAX_ORDER)
|
while (requested > (PAGE_SIZE << order) && order <= MAX_PAGE_ORDER)
|
||||||
order++;
|
order++;
|
||||||
|
|
||||||
err = vmlfb_alloc_vram_area(va, order, 0);
|
err = vmlfb_alloc_vram_area(va, order, 0);
|
||||||
|
@ -33,7 +33,7 @@
|
|||||||
#define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
|
#define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
|
||||||
__GFP_NOMEMALLOC)
|
__GFP_NOMEMALLOC)
|
||||||
/* The order of free page blocks to report to host */
|
/* The order of free page blocks to report to host */
|
||||||
#define VIRTIO_BALLOON_HINT_BLOCK_ORDER MAX_ORDER
|
#define VIRTIO_BALLOON_HINT_BLOCK_ORDER MAX_PAGE_ORDER
|
||||||
/* The size of a free page block in bytes */
|
/* The size of a free page block in bytes */
|
||||||
#define VIRTIO_BALLOON_HINT_BLOCK_BYTES \
|
#define VIRTIO_BALLOON_HINT_BLOCK_BYTES \
|
||||||
(1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT))
|
(1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT))
|
||||||
|
@ -1154,13 +1154,13 @@ static void virtio_mem_clear_fake_offline(unsigned long pfn,
|
|||||||
*/
|
*/
|
||||||
static void virtio_mem_fake_online(unsigned long pfn, unsigned long nr_pages)
|
static void virtio_mem_fake_online(unsigned long pfn, unsigned long nr_pages)
|
||||||
{
|
{
|
||||||
unsigned long order = MAX_ORDER;
|
unsigned long order = MAX_PAGE_ORDER;
|
||||||
unsigned long i;
|
unsigned long i;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* We might get called for ranges that don't cover properly aligned
|
* We might get called for ranges that don't cover properly aligned
|
||||||
* MAX_ORDER pages; however, we can only online properly aligned
|
* MAX_PAGE_ORDER pages; however, we can only online properly aligned
|
||||||
* pages with an order of MAX_ORDER at maximum.
|
* pages with an order of MAX_PAGE_ORDER at maximum.
|
||||||
*/
|
*/
|
||||||
while (!IS_ALIGNED(pfn | nr_pages, 1 << order))
|
while (!IS_ALIGNED(pfn | nr_pages, 1 << order))
|
||||||
order--;
|
order--;
|
||||||
@ -1280,7 +1280,7 @@ static void virtio_mem_online_page(struct virtio_mem *vm,
|
|||||||
bool do_online;
|
bool do_online;
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* We can get called with any order up to MAX_ORDER. If our subblock
|
* We can get called with any order up to MAX_PAGE_ORDER. If our subblock
|
||||||
* size is smaller than that and we have a mixture of plugged and
|
* size is smaller than that and we have a mixture of plugged and
|
||||||
* unplugged subblocks within such a page, we have to process in
|
* unplugged subblocks within such a page, we have to process in
|
||||||
* smaller granularity. In that case we'll adjust the order exactly once
|
* smaller granularity. In that case we'll adjust the order exactly once
|
||||||
|
@ -70,7 +70,7 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize)
|
|||||||
|
|
||||||
/* make various checks */
|
/* make various checks */
|
||||||
order = get_order(newsize);
|
order = get_order(newsize);
|
||||||
if (unlikely(order > MAX_ORDER))
|
if (unlikely(order > MAX_PAGE_ORDER))
|
||||||
return -EFBIG;
|
return -EFBIG;
|
||||||
|
|
||||||
ret = inode_newsize_ok(inode, newsize);
|
ret = inode_newsize_ok(inode, newsize);
|
||||||
|
@ -829,7 +829,7 @@ static inline unsigned huge_page_shift(struct hstate *h)
|
|||||||
|
|
||||||
static inline bool hstate_is_gigantic(struct hstate *h)
|
static inline bool hstate_is_gigantic(struct hstate *h)
|
||||||
{
|
{
|
||||||
return huge_page_order(h) > MAX_ORDER;
|
return huge_page_order(h) > MAX_PAGE_ORDER;
|
||||||
}
|
}
|
||||||
|
|
||||||
static inline unsigned int pages_per_huge_page(const struct hstate *h)
|
static inline unsigned int pages_per_huge_page(const struct hstate *h)
|
||||||
|
@@ -27,15 +27,15 @@

 /* Free memory management - zoned buddy allocator. */
 #ifndef CONFIG_ARCH_FORCE_MAX_ORDER
-#define MAX_ORDER 10
+#define MAX_PAGE_ORDER 10
 #else
-#define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
+#define MAX_PAGE_ORDER CONFIG_ARCH_FORCE_MAX_ORDER
 #endif
-#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
+#define MAX_ORDER_NR_PAGES (1 << MAX_PAGE_ORDER)

 #define IS_MAX_ORDER_ALIGNED(pfn) IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES)

-#define NR_PAGE_ORDERS (MAX_ORDER + 1)
+#define NR_PAGE_ORDERS (MAX_PAGE_ORDER + 1)

 /*
  * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed
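With the default value of 10, these definitions evaluate to MAX_ORDER_NR_PAGES = 1 << 10 = 1024 pages and NR_PAGE_ORDERS = 11 free lists per zone (orders 0 through 10); on a 4K-page configuration the largest buddy block is therefore 1024 * 4 KiB = 4 MiB.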
@@ -938,7 +938,7 @@ struct zone {
 	struct free_area free_area[NR_PAGE_ORDERS];

 #ifdef CONFIG_UNACCEPTED_MEMORY
-	/* Pages to be accepted. All pages on the list are MAX_ORDER */
+	/* Pages to be accepted. All pages on the list are MAX_PAGE_ORDER */
 	struct list_head unaccepted_pages;
 #endif

@@ -1748,8 +1748,8 @@ static inline bool movable_only_nodes(nodemask_t *nodes)
 #define SECTION_BLOCKFLAGS_BITS \
 	((1UL << (PFN_SECTION_SHIFT - pageblock_order)) * NR_PAGEBLOCK_BITS)

-#if (MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS
-#error Allocator MAX_ORDER exceeds SECTION_SIZE
+#if (MAX_PAGE_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS
+#error Allocator MAX_PAGE_ORDER exceeds SECTION_SIZE
 #endif

 static inline unsigned long pfn_to_section_nr(unsigned long pfn)
@@ -41,14 +41,14 @@ extern unsigned int pageblock_order;
 * Huge pages are a constant size, but don't exceed the maximum allocation
 * granularity.
 */
-#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER)
+#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_PAGE_ORDER)

 #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */

 #else /* CONFIG_HUGETLB_PAGE */

 /* If huge pages are not used, group by MAX_ORDER_NR_PAGES */
-#define pageblock_order MAX_ORDER
+#define pageblock_order MAX_PAGE_ORDER

 #endif /* CONFIG_HUGETLB_PAGE */

@@ -308,7 +308,7 @@ static inline unsigned int arch_slab_minalign(void)
 * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
 */
 #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX (MAX_PAGE_ORDER + PAGE_SHIFT)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW 5
 #endif

@@ -316,7 +316,7 @@ static inline unsigned int arch_slab_minalign(void)

 #ifdef CONFIG_SLUB
 #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
+#define KMALLOC_SHIFT_MAX (MAX_PAGE_ORDER + PAGE_SHIFT)
 #ifndef KMALLOC_SHIFT_LOW
 #define KMALLOC_SHIFT_LOW 3
 #endif
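Since KMALLOC_SHIFT_MAX is PAGE_SHIFT + MAX_PAGE_ORDER, the largest request kmalloc() will even attempt on a default 4K-page configuration is 1 << (12 + 10) = 4 MiB; anything larger should go through vmalloc() or kvmalloc(). A brief sketch of the distinction (assuming those defaults):

/* Sketch: the kmalloc() ceiling implied by KMALLOC_SHIFT_MAX on a 4K, order-10 config. */
void *p = kmalloc(4UL << 20, GFP_KERNEL);	/* at the limit; may still fail under fragmentation */
void *q = kvmalloc(16UL << 20, GFP_KERNEL);	/* beyond the limit: falls back to vmalloc */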
@ -84,8 +84,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
|
|||||||
void *addr;
|
void *addr;
|
||||||
int ret = -ENOMEM;
|
int ret = -ENOMEM;
|
||||||
|
|
||||||
/* Cannot allocate larger than MAX_ORDER */
|
/* Cannot allocate larger than MAX_PAGE_ORDER */
|
||||||
order = min(get_order(pool_size), MAX_ORDER);
|
order = min(get_order(pool_size), MAX_PAGE_ORDER);
|
||||||
|
|
||||||
do {
|
do {
|
||||||
pool_size = 1 << (PAGE_SHIFT + order);
|
pool_size = 1 << (PAGE_SHIFT + order);
|
||||||
@ -190,7 +190,7 @@ static int __init dma_atomic_pool_init(void)
|
|||||||
|
|
||||||
/*
|
/*
|
||||||
* If coherent_pool was not used on the command line, default the pool
|
* If coherent_pool was not used on the command line, default the pool
|
||||||
* sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER.
|
* sizes to 128KB per 1GB of memory, min 128KB, max MAX_PAGE_ORDER.
|
||||||
*/
|
*/
|
||||||
if (!atomic_pool_size) {
|
if (!atomic_pool_size) {
|
||||||
unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
|
unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
|
||||||
|
@ -686,8 +686,8 @@ static struct io_tlb_pool *swiotlb_alloc_pool(struct device *dev,
|
|||||||
size_t pool_size;
|
size_t pool_size;
|
||||||
size_t tlb_size;
|
size_t tlb_size;
|
||||||
|
|
||||||
if (nslabs > SLABS_PER_PAGE << MAX_ORDER) {
|
if (nslabs > SLABS_PER_PAGE << MAX_PAGE_ORDER) {
|
||||||
nslabs = SLABS_PER_PAGE << MAX_ORDER;
|
nslabs = SLABS_PER_PAGE << MAX_PAGE_ORDER;
|
||||||
nareas = limit_nareas(nareas, nslabs);
|
nareas = limit_nareas(nareas, nslabs);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -610,8 +610,8 @@ static struct page *rb_alloc_aux_page(int node, int order)
|
|||||||
{
|
{
|
||||||
struct page *page;
|
struct page *page;
|
||||||
|
|
||||||
if (order > MAX_ORDER)
|
if (order > MAX_PAGE_ORDER)
|
||||||
order = MAX_ORDER;
|
order = MAX_PAGE_ORDER;
|
||||||
|
|
||||||
do {
|
do {
|
||||||
page = alloc_pages_node(node, PERF_AUX_GFP, order);
|
page = alloc_pages_node(node, PERF_AUX_GFP, order);
|
||||||
@ -702,9 +702,9 @@ int rb_alloc_aux(struct perf_buffer *rb, struct perf_event *event,
|
|||||||
|
|
||||||
/*
|
/*
|
||||||
* kcalloc_node() is unable to allocate buffer if the size is larger
|
* kcalloc_node() is unable to allocate buffer if the size is larger
|
||||||
* than: PAGE_SIZE << MAX_ORDER; directly bail out in this case.
|
* than: PAGE_SIZE << MAX_PAGE_ORDER; directly bail out in this case.
|
||||||
*/
|
*/
|
||||||
if (get_order((unsigned long)nr_pages * sizeof(void *)) > MAX_ORDER)
|
if (get_order((unsigned long)nr_pages * sizeof(void *)) > MAX_PAGE_ORDER)
|
||||||
return -ENOMEM;
|
return -ENOMEM;
|
||||||
rb->aux_pages = kcalloc_node(nr_pages, sizeof(void *), GFP_KERNEL,
|
rb->aux_pages = kcalloc_node(nr_pages, sizeof(void *), GFP_KERNEL,
|
||||||
node);
|
node);
|
||||||
@ -821,7 +821,7 @@ struct perf_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
|
|||||||
size = sizeof(struct perf_buffer);
|
size = sizeof(struct perf_buffer);
|
||||||
size += nr_pages * sizeof(void *);
|
size += nr_pages * sizeof(void *);
|
||||||
|
|
||||||
if (order_base_2(size) > PAGE_SHIFT+MAX_ORDER)
|
if (order_base_2(size) > PAGE_SHIFT+MAX_PAGE_ORDER)
|
||||||
goto fail;
|
goto fail;
|
||||||
|
|
||||||
node = (cpu == -1) ? cpu : cpu_to_node(cpu);
|
node = (cpu == -1) ? cpu : cpu_to_node(cpu);
|
||||||
|
@ -381,7 +381,7 @@ config SHUFFLE_PAGE_ALLOCATOR
|
|||||||
the presence of a memory-side-cache. There are also incidental
|
the presence of a memory-side-cache. There are also incidental
|
||||||
security benefits as it reduces the predictability of page
|
security benefits as it reduces the predictability of page
|
||||||
allocations to compliment SLAB_FREELIST_RANDOM, but the
|
allocations to compliment SLAB_FREELIST_RANDOM, but the
|
||||||
default granularity of shuffling on the MAX_ORDER i.e, 10th
|
default granularity of shuffling on the MAX_PAGE_ORDER i.e, 10th
|
||||||
order of pages is selected based on cache utilization benefits
|
order of pages is selected based on cache utilization benefits
|
||||||
on x86.
|
on x86.
|
||||||
|
|
||||||
@ -713,8 +713,8 @@ config HUGETLB_PAGE_SIZE_VARIABLE
|
|||||||
HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
|
HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
|
||||||
on a platform.
|
on a platform.
|
||||||
|
|
||||||
Note that the pageblock_order cannot exceed MAX_ORDER and will be
|
Note that the pageblock_order cannot exceed MAX_PAGE_ORDER and will be
|
||||||
clamped down to MAX_ORDER.
|
clamped down to MAX_PAGE_ORDER.
|
||||||
|
|
||||||
config CONTIG_ALLOC
|
config CONTIG_ALLOC
|
||||||
def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
|
def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
|
||||||
|
@ -999,7 +999,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
|
|||||||
* a valid page order. Consider only values in the
|
* a valid page order. Consider only values in the
|
||||||
* valid order range to prevent low_pfn overflow.
|
* valid order range to prevent low_pfn overflow.
|
||||||
*/
|
*/
|
||||||
if (freepage_order > 0 && freepage_order <= MAX_ORDER) {
|
if (freepage_order > 0 && freepage_order <= MAX_PAGE_ORDER) {
|
||||||
low_pfn += (1UL << freepage_order) - 1;
|
low_pfn += (1UL << freepage_order) - 1;
|
||||||
nr_scanned += (1UL << freepage_order) - 1;
|
nr_scanned += (1UL << freepage_order) - 1;
|
||||||
}
|
}
|
||||||
@ -1017,7 +1017,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
|
|||||||
if (PageCompound(page) && !cc->alloc_contig) {
|
if (PageCompound(page) && !cc->alloc_contig) {
|
||||||
const unsigned int order = compound_order(page);
|
const unsigned int order = compound_order(page);
|
||||||
|
|
||||||
if (likely(order <= MAX_ORDER)) {
|
if (likely(order <= MAX_PAGE_ORDER)) {
|
||||||
low_pfn += (1UL << order) - 1;
|
low_pfn += (1UL << order) - 1;
|
||||||
nr_scanned += (1UL << order) - 1;
|
nr_scanned += (1UL << order) - 1;
|
||||||
}
|
}
|
||||||
|
@ -22,7 +22,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
|
|||||||
{
|
{
|
||||||
unsigned long res;
|
unsigned long res;
|
||||||
|
|
||||||
if (kstrtoul(buf, 10, &res) < 0 || res > MAX_ORDER / 2) {
|
if (kstrtoul(buf, 10, &res) < 0 || res > MAX_PAGE_ORDER / 2) {
|
||||||
pr_err("Bad debug_guardpage_minorder value\n");
|
pr_err("Bad debug_guardpage_minorder value\n");
|
||||||
return 0;
|
return 0;
|
||||||
}
|
}
|
||||||
|
@ -1091,7 +1091,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order)
|
|||||||
struct page *page = NULL;
|
struct page *page = NULL;
|
||||||
|
|
||||||
#ifdef CONFIG_CONTIG_ALLOC
|
#ifdef CONFIG_CONTIG_ALLOC
|
||||||
if (order > MAX_ORDER) {
|
if (order > MAX_PAGE_ORDER) {
|
||||||
page = alloc_contig_pages((1 << order), GFP_KERNEL,
|
page = alloc_contig_pages((1 << order), GFP_KERNEL,
|
||||||
first_online_node, NULL);
|
first_online_node, NULL);
|
||||||
if (page) {
|
if (page) {
|
||||||
@ -1101,7 +1101,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order)
|
|||||||
}
|
}
|
||||||
#endif
|
#endif
|
||||||
|
|
||||||
if (order <= MAX_ORDER)
|
if (order <= MAX_PAGE_ORDER)
|
||||||
page = alloc_pages(GFP_KERNEL, order);
|
page = alloc_pages(GFP_KERNEL, order);
|
||||||
|
|
||||||
return page;
|
return page;
|
||||||
|
@ -682,7 +682,7 @@ static int __init hugepage_init(void)
|
|||||||
/*
|
/*
|
||||||
* hugepages can't be allocated by the buddy allocator
|
* hugepages can't be allocated by the buddy allocator
|
||||||
*/
|
*/
|
||||||
MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER > MAX_ORDER);
|
MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER > MAX_PAGE_ORDER);
|
||||||
/*
|
/*
|
||||||
* we use page->mapping and page->index in second tail page
|
* we use page->mapping and page->index in second tail page
|
||||||
* as list_head: assuming THP order >= 2
|
* as list_head: assuming THP order >= 2
|
||||||
|
@ -3410,7 +3410,7 @@ static void __init prep_and_add_bootmem_folios(struct hstate *h,
|
|||||||
|
|
||||||
/*
|
/*
|
||||||
* Put bootmem huge pages into the standard lists after mem_map is up.
|
* Put bootmem huge pages into the standard lists after mem_map is up.
|
||||||
* Note: This only applies to gigantic (order > MAX_ORDER) pages.
|
* Note: This only applies to gigantic (order > MAX_PAGE_ORDER) pages.
|
||||||
*/
|
*/
|
||||||
static void __init gather_bootmem_prealloc(void)
|
static void __init gather_bootmem_prealloc(void)
|
||||||
{
|
{
|
||||||
@ -4790,7 +4790,7 @@ static int __init default_hugepagesz_setup(char *s)
|
|||||||
* The number of default huge pages (for this size) could have been
|
* The number of default huge pages (for this size) could have been
|
||||||
* specified as the first hugetlb parameter: hugepages=X. If so,
|
* specified as the first hugetlb parameter: hugepages=X. If so,
|
||||||
* then default_hstate_max_huge_pages is set. If the default huge
|
* then default_hstate_max_huge_pages is set. If the default huge
|
||||||
* page size is gigantic (> MAX_ORDER), then the pages must be
|
* page size is gigantic (> MAX_PAGE_ORDER), then the pages must be
|
||||||
* allocated here from bootmem allocator.
|
* allocated here from bootmem allocator.
|
||||||
*/
|
*/
|
||||||
if (default_hstate_max_huge_pages) {
|
if (default_hstate_max_huge_pages) {
|
||||||
|
@ -335,7 +335,7 @@ static inline bool page_is_buddy(struct page *page, struct page *buddy,
|
|||||||
* satisfies the following equation:
|
* satisfies the following equation:
|
||||||
* P = B & ~(1 << O)
|
* P = B & ~(1 << O)
|
||||||
*
|
*
|
||||||
* Assumption: *_mem_map is contiguous at least up to MAX_ORDER
|
* Assumption: *_mem_map is contiguous at least up to MAX_PAGE_ORDER
|
||||||
*/
|
*/
|
||||||
static inline unsigned long
|
static inline unsigned long
|
||||||
__find_buddy_pfn(unsigned long page_pfn, unsigned int order)
|
__find_buddy_pfn(unsigned long page_pfn, unsigned int order)
|
||||||
|
@ -141,7 +141,7 @@ struct smallstack {
|
|||||||
|
|
||||||
static struct smallstack collect = {
|
static struct smallstack collect = {
|
||||||
.index = 0,
|
.index = 0,
|
||||||
.order = MAX_ORDER,
|
.order = MAX_PAGE_ORDER,
|
||||||
};
|
};
|
||||||
|
|
||||||
static void smallstack_push(struct smallstack *stack, struct page *pages)
|
static void smallstack_push(struct smallstack *stack, struct page *pages)
|
||||||
@ -211,8 +211,8 @@ static void kmsan_memblock_discard(void)
|
|||||||
* order=N-1,
|
* order=N-1,
|
||||||
* - repeat.
|
* - repeat.
|
||||||
*/
|
*/
|
||||||
collect.order = MAX_ORDER;
|
collect.order = MAX_PAGE_ORDER;
|
||||||
for (int i = MAX_ORDER; i >= 0; i--) {
|
for (int i = MAX_PAGE_ORDER; i >= 0; i--) {
|
||||||
if (held_back[i].shadow)
|
if (held_back[i].shadow)
|
||||||
smallstack_push(&collect, held_back[i].shadow);
|
smallstack_push(&collect, held_back[i].shadow);
|
||||||
if (held_back[i].origin)
|
if (held_back[i].origin)
|
||||||
|
@@ -2113,12 +2113,13 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 * Free the pages in the largest chunks alignment allows.
 *
 * __ffs() behaviour is undefined for 0. start == 0 is
- * MAX_ORDER-aligned, set order to MAX_ORDER for the case.
+ * MAX_PAGE_ORDER-aligned, set order to MAX_PAGE_ORDER for
+ * the case.
 */
 if (start)
-	order = min_t(int, MAX_ORDER, __ffs(start));
+	order = min_t(int, MAX_PAGE_ORDER, __ffs(start));
 else
-	order = MAX_ORDER;
+	order = MAX_PAGE_ORDER;

 while (start + (1UL << order) > end)
 	order--;
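The __ffs() trick above picks the largest order that is both naturally aligned at start and still within the allocator limit; a standalone illustration of the same computation in plain C (using the compiler builtin __builtin_ctzl in place of the kernel's __ffs):

/* Sketch: choose the largest block order allowed by alignment and by the remaining range. */
static int pick_order(unsigned long start, unsigned long end, int max_page_order)
{
	/* lowest set bit of start == its largest natural alignment; start == 0 is fully aligned */
	int order = start ? __builtin_ctzl(start) : max_page_order;

	if (order > max_page_order)
		order = max_page_order;
	while (start + (1UL << order) > end)
		order--;		/* shrink until the block fits in [start, end) */
	return order;
}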
@@ -645,7 +645,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 unsigned long pfn;

 /*
- * Online the pages in MAX_ORDER aligned chunks. The callback might
+ * Online the pages in MAX_PAGE_ORDER aligned chunks. The callback might
  * decide to not expose all pages to the buddy (e.g., expose them
  * later). We account all pages as being online and belonging to this
  * zone ("present").

@@ -660,12 +660,13 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 * Free to online pages in the largest chunks alignment allows.
 *
 * __ffs() behaviour is undefined for 0. start == 0 is
- * MAX_ORDER-aligned, Set order to MAX_ORDER for the case.
+ * MAX_PAGE_ORDER-aligned, Set order to MAX_PAGE_ORDER for
+ * the case.
 */
 if (pfn)
-	order = min_t(int, MAX_ORDER, __ffs(pfn));
+	order = min_t(int, MAX_PAGE_ORDER, __ffs(pfn));
 else
-	order = MAX_ORDER;
+	order = MAX_PAGE_ORDER;

 (*online_page_callback)(pfn_to_page(pfn), order);
 pfn += (1UL << order);
22
mm/mm_init.c
22
mm/mm_init.c
@@ -1455,7 +1455,7 @@ static inline void setup_usemap(struct zone *zone) {}
 /* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */
 void __init set_pageblock_order(void)
 {
-	unsigned int order = MAX_ORDER;
+	unsigned int order = MAX_PAGE_ORDER;
 
 	/* Check that pageblock_nr_pages has not already been setup */
 	if (pageblock_order)
@@ -1638,7 +1638,7 @@ static void __init alloc_node_mem_map(struct pglist_data *pgdat)
 	start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
 	offset = pgdat->node_start_pfn - start;
 	/*
-	 * The zone's endpoints aren't required to be MAX_ORDER
+	 * The zone's endpoints aren't required to be MAX_PAGE_ORDER
 	 * aligned but the node_mem_map endpoints must be in order
 	 * for the buddy allocator to function correctly.
 	 */
@@ -1964,11 +1964,11 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
 		for (i = 0; i < nr_pages; i += pageblock_nr_pages)
 			set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-		__free_pages_core(page, MAX_ORDER);
+		__free_pages_core(page, MAX_PAGE_ORDER);
 		return;
 	}
 
-	/* Accept chunks smaller than MAX_ORDER upfront */
+	/* Accept chunks smaller than MAX_PAGE_ORDER upfront */
 	accept_memory(PFN_PHYS(pfn), PFN_PHYS(pfn + nr_pages));
 
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
@@ -1991,8 +1991,8 @@ static inline void __init pgdat_init_report_one_done(void)
 /*
  * Returns true if page needs to be initialized or freed to buddy allocator.
  *
- * We check if a current MAX_ORDER block is valid by only checking the validity
- * of the head pfn.
+ * We check if a current MAX_PAGE_ORDER block is valid by only checking the
+ * validity of the head pfn.
  */
 static inline bool __init deferred_pfn_valid(unsigned long pfn)
 {
@@ -2149,8 +2149,8 @@ deferred_init_memmap_chunk(unsigned long start_pfn, unsigned long end_pfn,
 	deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn, start_pfn);
 
 	/*
-	 * Initialize and free pages in MAX_ORDER sized increments so that we
-	 * can avoid introducing any issues with the buddy allocator.
+	 * Initialize and free pages in MAX_PAGE_ORDER sized increments so that
+	 * we can avoid introducing any issues with the buddy allocator.
 	 */
 	while (spfn < end_pfn) {
 		deferred_init_maxorder(&i, zone, &spfn, &epfn);
@@ -2291,7 +2291,7 @@ bool __init deferred_grow_zone(struct zone *zone, unsigned int order)
 	}
 
 	/*
-	 * Initialize and free pages in MAX_ORDER sized increments so
+	 * Initialize and free pages in MAX_PAGE_ORDER sized increments so
 	 * that we can avoid introducing any issues with the buddy
 	 * allocator.
 	 */
@@ -2509,7 +2509,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 		else
 			table = memblock_alloc_raw(size,
 						   SMP_CACHE_BYTES);
-	} else if (get_order(size) > MAX_ORDER || hashdist) {
+	} else if (get_order(size) > MAX_PAGE_ORDER || hashdist) {
 		table = vmalloc_huge(size, gfp_flags);
 		virt = true;
 		if (table)
@@ -2756,7 +2756,7 @@ void __init mm_core_init(void)
 
 	/*
 	 * page_ext requires contiguous pages,
-	 * bigger than MAX_ORDER unless SPARSEMEM.
+	 * bigger than MAX_PAGE_ORDER unless SPARSEMEM.
 	 */
 	page_ext_init_flatmem();
 	mem_debugging_and_hardening_init();
@@ -727,7 +727,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 	unsigned long higher_page_pfn;
 	struct page *higher_page;
 
-	if (order >= MAX_ORDER - 1)
+	if (order >= MAX_PAGE_ORDER - 1)
 		return false;
 
 	higher_page_pfn = buddy_pfn & pfn;
@@ -782,7 +782,7 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
-	while (order < MAX_ORDER) {
+	while (order < MAX_PAGE_ORDER) {
 		if (compaction_capture(capc, page, order, migratetype)) {
 			__mod_zone_freepage_state(zone, -(1 << order),
 						  migratetype);
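
The `while (order < MAX_PAGE_ORDER)` bound above exists because merging two free order-N buddies produces one order-(N+1) block, and the free lists only track orders up to MAX_PAGE_ORDER. A toy user-space model of that merge loop (the free-list check is stubbed out; the names and values are illustrative, not the kernel's code) is:

#include <stdbool.h>
#include <stdio.h>

#define MAX_PAGE_ORDER	10

/* Stub: pretend every buddy we look at is free, so merging always succeeds. */
static bool buddy_is_free(unsigned long pfn, unsigned int order)
{
	(void)pfn;
	(void)order;
	return true;
}

/* Merge a freed block upwards as far as the free lists can track it. */
static unsigned int merge_up(unsigned long pfn, unsigned int order)
{
	while (order < MAX_PAGE_ORDER) {
		unsigned long buddy_pfn = pfn ^ (1UL << order);

		if (!buddy_is_free(buddy_pfn, order))
			break;
		pfn &= buddy_pfn;	/* parent block starts at the lower PFN */
		order++;
	}
	return order;
}

int main(void)
{
	/* With every buddy free, an order-0 page merges all the way to MAX_PAGE_ORDER. */
	printf("final order: %u\n", merge_up(512, 0));
	return 0;
}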
@@ -1297,7 +1297,7 @@ void __free_pages_core(struct page *page, unsigned int order)
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 
 	if (page_contains_unaccepted(page, order)) {
-		if (order == MAX_ORDER && __free_unaccepted(page))
+		if (order == MAX_PAGE_ORDER && __free_unaccepted(page))
 			return;
 
 		accept_page(page, order);
@@ -1327,7 +1327,7 @@ void __free_pages_core(struct page *page, unsigned int order)
  *
  * Note: the function may return non-NULL struct page even for a page block
  * which contains a memory hole (i.e. there is no physical memory for a subset
- * of the pfn range). For example, if the pageblock order is MAX_ORDER, which
+ * of the pfn range). For example, if the pageblock order is MAX_PAGE_ORDER, which
  * will fall into 2 sub-sections, and the end pfn of the pageblock may be hole
 * even though the start pfn is online and valid. This should be safe most of
 * the time because struct pages are still initialized via init_unavailable_range()
@@ -2018,7 +2018,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	 * approximates finding the pageblock with the most free pages, which
 	 * would be too costly to do exactly.
 	 */
-	for (current_order = MAX_ORDER; current_order >= min_order;
+	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
 				--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
@@ -2056,7 +2056,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	 * This should not happen - we already found a suitable fallback
 	 * when looking for the largest page.
 	 */
-	VM_BUG_ON(current_order > MAX_ORDER);
+	VM_BUG_ON(current_order > MAX_PAGE_ORDER);
 
 do_steal:
 	page = get_page_from_free_area(area, fallback_mt);
@@ -4533,7 +4533,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	 * There are several places where we assume that the order value is sane
 	 * so bail out early if the request is out of bound.
 	 */
-	if (WARN_ON_ONCE_GFP(order > MAX_ORDER, gfp))
+	if (WARN_ON_ONCE_GFP(order > MAX_PAGE_ORDER, gfp))
 		return NULL;
 
 	gfp &= gfp_allowed_mask;
@@ -4815,7 +4815,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
 * minimum number of pages to satisfy the request. alloc_pages() can only
 * allocate memory in power-of-two pages.
 *
- * This function is also limited by MAX_ORDER.
+ * This function is also limited by MAX_PAGE_ORDER.
 *
 * Memory allocated by this function must be released by free_pages_exact().
 *
@@ -6373,7 +6373,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	order = 0;
 	outer_start = start;
 	while (!PageBuddy(pfn_to_page(outer_start))) {
-		if (++order > MAX_ORDER) {
+		if (++order > MAX_PAGE_ORDER) {
 			outer_start = start;
 			break;
 		}
@@ -6635,7 +6635,7 @@ bool is_free_buddy_page(struct page *page)
 			break;
 	}
 
-	return order <= MAX_ORDER;
+	return order <= MAX_PAGE_ORDER;
 }
 EXPORT_SYMBOL(is_free_buddy_page);
 
@@ -6807,9 +6807,9 @@ static bool try_to_accept_memory_one(struct zone *zone)
 	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
-	accept_page(page, MAX_ORDER);
+	accept_page(page, MAX_PAGE_ORDER);
 
-	__free_pages_ok(page, MAX_ORDER, FPI_TO_TAIL);
+	__free_pages_ok(page, MAX_PAGE_ORDER, FPI_TO_TAIL);
 
 	if (last)
 		static_branch_dec(&zones_with_unaccepted_pages);
@@ -226,7 +226,7 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	 */
 	if (PageBuddy(page)) {
 		order = buddy_order(page);
-		if (order >= pageblock_order && order < MAX_ORDER) {
+		if (order >= pageblock_order && order < MAX_PAGE_ORDER) {
 			buddy = find_buddy_page_pfn(page, page_to_pfn(page),
 						    order, NULL);
 			if (buddy && !is_migrate_isolate_page(buddy)) {
@@ -290,11 +290,12 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
 * isolate_single_pageblock()
 * @migratetype:	migrate type to set in error recovery.
 *
- * Free and in-use pages can be as big as MAX_ORDER and contain more than one
+ * Free and in-use pages can be as big as MAX_PAGE_ORDER and contain more than one
 * pageblock. When not all pageblocks within a page are isolated at the same
 * time, free page accounting can go wrong. For example, in the case of
- * MAX_ORDER = pageblock_order + 1, a MAX_ORDER page has two pagelbocks.
- * [         MAX_ORDER         ]
+ * MAX_PAGE_ORDER = pageblock_order + 1, a MAX_PAGE_ORDER page has two
+ * pagelbocks.
+ * [      MAX_PAGE_ORDER       ]
 * [  pageblock0  |  pageblock1 ]
 * When either pageblock is isolated, if it is a free page, the page is not
 * split into separate migratetype lists, which is supposed to; if it is an
@@ -451,7 +452,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 			 * the free page to the right migratetype list.
 			 *
 			 * head_pfn is not used here as a hugetlb page order
-			 * can be bigger than MAX_ORDER, but after it is
+			 * can be bigger than MAX_PAGE_ORDER, but after it is
 			 * freed, the free page order is not. Use pfn within
 			 * the range to find the head of the free page.
 			 */
@@ -459,7 +460,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 			outer_pfn = pfn;
 			while (!PageBuddy(pfn_to_page(outer_pfn))) {
 				/* stop if we cannot find the free page */
-				if (++order > MAX_ORDER)
+				if (++order > MAX_PAGE_ORDER)
 					goto failed;
 				outer_pfn &= ~0UL << order;
 			}
@@ -660,8 +661,8 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
 	int ret;
 
 	/*
-	 * Note: pageblock_nr_pages != MAX_ORDER. Then, chunks of free pages
-	 * are not aligned to pageblock_nr_pages.
+	 * Note: pageblock_nr_pages != MAX_PAGE_ORDER. Then, chunks of free
+	 * pages are not aligned to pageblock_nr_pages.
 	 * Then we just check migratetype first.
 	 */
 	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
@@ -320,7 +320,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 			unsigned long freepage_order;
 
 			freepage_order = buddy_order_unsafe(page);
-			if (freepage_order <= MAX_ORDER)
+			if (freepage_order <= MAX_PAGE_ORDER)
 				pfn += (1UL << freepage_order) - 1;
 			continue;
 		}
@@ -555,7 +555,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		if (PageBuddy(page)) {
 			unsigned long freepage_order = buddy_order_unsafe(page);
 
-			if (freepage_order <= MAX_ORDER)
+			if (freepage_order <= MAX_PAGE_ORDER)
 				pfn += (1UL << freepage_order) - 1;
 			continue;
 		}
@@ -663,7 +663,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 		if (PageBuddy(page)) {
 			unsigned long order = buddy_order_unsafe(page);
 
-			if (order > 0 && order <= MAX_ORDER)
+			if (order > 0 && order <= MAX_PAGE_ORDER)
 				pfn += (1UL << order) - 1;
 			continue;
 		}
@@ -20,7 +20,7 @@ static int page_order_update_notify(const char *val, const struct kernel_param *
 	 * If param is set beyond this limit, order is set to default
 	 * pageblock_order value
 	 */
-	return param_set_uint_minmax(val, kp, 0, MAX_ORDER);
+	return param_set_uint_minmax(val, kp, 0, MAX_PAGE_ORDER);
 }
 
 static const struct kernel_param_ops page_reporting_param_ops = {
@@ -370,7 +370,7 @@ int page_reporting_register(struct page_reporting_dev_info *prdev)
 	 */
 
 	if (page_reporting_order == -1) {
-		if (prdev->order > 0 && prdev->order <= MAX_ORDER)
+		if (prdev->order > 0 && prdev->order <= MAX_PAGE_ORDER)
 			page_reporting_order = prdev->order;
 		else
 			page_reporting_order = pageblock_order;
@@ -4,7 +4,7 @@
 #define _MM_SHUFFLE_H
 #include <linux/jump_label.h>
 
-#define SHUFFLE_ORDER	MAX_ORDER
+#define SHUFFLE_ORDER	MAX_PAGE_ORDER
 
 #ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR
 DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
@@ -465,7 +465,7 @@ static int __init slab_max_order_setup(char *str)
 {
 	get_option(&str, &slab_max_order);
 	slab_max_order = slab_max_order < 0 ? 0 :
-				min(slab_max_order, MAX_ORDER);
+				min(slab_max_order, MAX_PAGE_ORDER);
 	slab_max_order_set = true;
 
 	return 1;
@@ -4194,7 +4194,7 @@ static inline int calculate_order(unsigned int size)
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
 	order = get_order(size);
-	if (order <= MAX_ORDER)
+	if (order <= MAX_PAGE_ORDER)
 		return order;
 	return -ENOSYS;
 }
@@ -4722,7 +4722,7 @@ __setup("slub_min_order=", setup_slub_min_order);
 static int __init setup_slub_max_order(char *str)
 {
 	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min_t(unsigned int, slub_max_order, MAX_ORDER);
+	slub_max_order = min_t(unsigned int, slub_max_order, MAX_PAGE_ORDER);
 
 	if (slub_min_order > slub_max_order)
 		slub_min_order = slub_max_order;
@@ -6415,7 +6415,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 	 * scan_control uses s8 fields for order, priority, and reclaim_idx.
 	 * Confirm they are large enough for max values.
 	 */
-	BUILD_BUG_ON(MAX_ORDER >= S8_MAX);
+	BUILD_BUG_ON(MAX_PAGE_ORDER >= S8_MAX);
 	BUILD_BUG_ON(DEF_PRIORITY > S8_MAX);
 	BUILD_BUG_ON(MAX_NR_ZONES > S8_MAX);
 
@@ -1092,7 +1092,7 @@ static int __fragmentation_index(unsigned int order, struct contig_page_info *in
 {
 	unsigned long requested = 1UL << order;
 
-	if (WARN_ON_ONCE(order > MAX_ORDER))
+	if (WARN_ON_ONCE(order > MAX_PAGE_ORDER))
 		return 0;
 
 	if (!info->free_blocks_total)
@@ -844,7 +844,7 @@ long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev)
 		goto out;
 	/* the calculated number of cq entries fits to mlx5 cq allocation */
 	cqe_size_order = cache_line_size() == 128 ? 7 : 6;
-	smc_order = MAX_ORDER - cqe_size_order;
+	smc_order = MAX_PAGE_ORDER - cqe_size_order;
 	if (SMC_MAX_CQE + 2 > (0x00000001 << smc_order) * PAGE_SIZE)
 		cqattr.cqe = (0x00000001 << smc_order) * PAGE_SIZE - 2;
 	smcibdev->roce_cq_send = ib_create_cq(smcibdev->ibdev,
@@ -38,7 +38,7 @@ static int param_set_bufsize(const char *val, const struct kernel_param *kp)
 
 	size = memparse(val, NULL);
 	order = get_order(size);
-	if (order > MAX_ORDER)
+	if (order > MAX_PAGE_ORDER)
 		return -EINVAL;
 	ima_maxorder = order;
 	ima_bufsize = PAGE_SIZE << order;
@@ -683,7 +683,7 @@ Buffer handling
 ~~~~~~~~~~~~~~~
 
 There may be buffer limitations (i.e. single ToPa entry) which means that actual
-buffer sizes are limited to powers of 2 up to 4MiB (MAX_ORDER). In order to
+buffer sizes are limited to powers of 2 up to 4MiB (MAX_PAGE_ORDER). In order to
 provide other sizes, and in particular an arbitrarily large size, multiple
 buffers are logically concatenated. However an interrupt must be used to switch
 between buffers. That has two potential problems:
@@ -17,10 +17,10 @@ enum zone_type {
 };
 
 #define MAX_NR_ZONES __MAX_NR_ZONES
-#define MAX_ORDER 10
-#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
+#define MAX_PAGE_ORDER 10
+#define MAX_ORDER_NR_PAGES (1 << MAX_PAGE_ORDER)
 
-#define pageblock_order		MAX_ORDER
+#define pageblock_order		MAX_PAGE_ORDER
 #define pageblock_nr_pages	BIT(pageblock_order)
 #define pageblock_align(pfn)	ALIGN((pfn), pageblock_nr_pages)
 #define pageblock_start_pfn(pfn)	ALIGN_DOWN((pfn), pageblock_nr_pages)
|
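As a quick sanity check of what the stub above implies: with MAX_PAGE_ORDER set to 10, the largest buddy block is 2^10 = 1024 pages, which is 4 MiB with the common 4 KiB page size (other architectures and page sizes give different figures; the PAGE_SIZE value below is an assumption for the example):

#include <stdio.h>

#define MAX_PAGE_ORDER		10
#define MAX_ORDER_NR_PAGES	(1 << MAX_PAGE_ORDER)
#define PAGE_SIZE		4096UL	/* assumed 4 KiB pages */

int main(void)
{
	/* 1024 pages * 4 KiB = 4096 KiB = 4 MiB, the largest block the buddy allocator hands out */
	printf("largest buddy block: %d pages = %lu KiB\n",
	       MAX_ORDER_NR_PAGES,
	       MAX_ORDER_NR_PAGES * PAGE_SIZE / 1024);
	return 0;
}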
@@ -3,7 +3,8 @@
 
 Before running this huge pages for each huge page size must have been
 reserved.
-For large pages beyond MAX_ORDER (like 1GB on x86) boot options must be used.
+For large pages beyond MAX_PAGE_ORDER (like 1GB on x86) boot options must
+be used.
 Also shmmax must be increased.
 And you need to run as root to work around some weird permissions in shm.
 And nothing using huge pages should run in parallel.