mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix

add a comment from David explaining why atomic_add_unless() will not write to a write-protected vmemmap area

Link: https://lkml.kernel.org/r/f5a65bf5-5105-4376-9c1c-164a15a4ab79@redhat.com
Cc: Mateusz Guzik <mjguzik@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Yu Zhao <yuzhao@google.com>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

@@ -234,8 +234,15 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page))
+	if (!page_is_fake_head(page)) {
+		/*
+		 * atomic_add_unless() will currently never modify the value
+		 * if it already is u. If that ever changes, we'd have to have
+		 * a separate check here, such that we won't be writing to
+		 * write-protected vmemmap areas.
+		 */
 		ret = atomic_add_unless(&page->_refcount, nr, u);
+	}
 	rcu_read_unlock();
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
 		__page_ref_mod_unless(page, nr, ret);
 	return ret;
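
For reference, the behavior the new comment relies on comes from how atomic_add_unless() is implemented: the generic fallback reads the counter first and returns before issuing any store when the value already equals u, so the cmpxchg loop never touches memory in that case. Below is a paraphrased sketch of that fallback, not the exact kernel source (the helper name is made up here, and architectures may override the generic version):

static inline bool sketch_atomic_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);

	do {
		/* the value is already u: bail out without ever writing */
		if (c == u)
			return false;
		/*
		 * atomic_try_cmpxchg() stores c + a only if *v still
		 * equals c; on failure it reloads c and we retry.
		 */
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return true;
}

Since no store is ever issued once the counter equals u, calling this on a page whose refcount is already u cannot dirty a write-protected vmemmap area, which is why no separate pre-check is needed today.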