mirror of
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
synced 2025-01-10 15:58:47 +00:00
mips: use nth_page() in place of direct struct page manipulation
__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page() ->
move_to_new_folio() -> flush_dcache_folio().  With hugetlb and without
sparsemem vmemmap, struct page is not guaranteed to be contiguous beyond
a section, so use nth_page() instead.

Without the fix, a wrong address might be used for the data cache page
flush.  No bug has been reported; the fix comes from code inspection.

Link: https://lkml.kernel.org/r/20230913201248.452081-6-zi.yan@sent.com
Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
This commit is contained in:
parent 8db0ec791f
commit aa5fe31b6b
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *page, unsigned int nr)
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
 		flush_data_cache_page(addr);
 		kunmap_local((void *)addr);
 	}