mm: mprotect: use a folio in change_pte_range()

Use a folio in change_pte_range() to save three compound_head() calls.
Since NUMA balancing now handles only normal and PMD-mapped pages, it
is enough to update the access time once for the entire folio.
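
The saving comes from the struct page helpers having to resolve a
possible tail page to its head on every call, while a folio is never a
tail page. A simplified sketch of the pattern generated by the page
flag macros (illustrative only, not the literal definitions in
include/linux/page-flags.h):

	/* page variant: a compound_head() lookup on every call */
	static inline int PageDirty(struct page *page)
	{
		return test_bit(PG_dirty, &compound_head(page)->flags);
	}

	/* folio variant: the head page is already in hand */
	static inline bool folio_test_dirty(struct folio *folio)
	{
		return test_bit(PG_dirty, &folio->flags);
	}

The other helpers converted here (folio_test_ksm(), folio_ref_count(),
folio_is_file_lru()) avoid the lookup in the same way.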

Link: https://lkml.kernel.org/r/20231018140806.2783514-10-wangkefeng.wang@huawei.com
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
@@ -114,7 +114,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa) {
-				struct page *page;
+				struct folio *folio;
 				int nid;
 				bool toptier;
 
@@ -122,13 +122,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 				if (pte_protnone(oldpte))
 					continue;
 
-				page = vm_normal_page(vma, addr, oldpte);
-				if (!page || is_zone_device_page(page) || PageKsm(page))
+				folio = vm_normal_folio(vma, addr, oldpte);
+				if (!folio || folio_is_zone_device(folio) ||
+				    folio_test_ksm(folio))
 					continue;
 
 				/* Also skip shared copy-on-write pages */
 				if (is_cow_mapping(vma->vm_flags) &&
-				    page_count(page) != 1)
+				    folio_ref_count(folio) != 1)
 					continue;
 
 				/*
@@ -136,14 +137,15 @@ static long change_pte_range(struct mmu_gather *tlb,
 				 * it cannot move them all from MIGRATE_ASYNC
 				 * context.
 				 */
-				if (page_is_file_lru(page) && PageDirty(page))
+				if (folio_is_file_lru(folio) &&
+				    folio_test_dirty(folio))
 					continue;
 
 				/*
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				nid = page_to_nid(page);
+				nid = folio_nid(folio);
 				if (target_node == nid)
 					continue;
 				toptier = node_is_toptier(nid);
@@ -157,7 +159,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 					continue;
 				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 				    !toptier)
-					xchg_page_access_time(page,
+					folio_xchg_access_time(folio,
 						jiffies_to_msecs(jiffies));
 			}
 