5f6170a469
Patch series "implement lightweight guard pages", v4.

Userland library functions such as allocators and threading implementations often require regions of memory to act as 'guard pages' - mappings which, when accessed, result in a fatal signal being sent to the accessing process.

The current means by which these are implemented is via a PROT_NONE mmap() mapping, which provides the required semantics but incurs the overhead of a VMA for each such region. With a great many processes and threads, this can rapidly add up to a significant memory penalty. It also has the added problem of preventing merges that might otherwise be permitted.

This series takes a different approach - an idea suggested by Vlastimil Babka (and before him David Hildenbrand and Jann Horn - perhaps more - the provenance becomes a little tricky to ascertain after this - please forgive any omissions!) - rather than locating the guard pages at the VMA layer, we instead place them in the page tables mapping the required ranges.

Early testing of the prototype version of this code suggests a 5 times speed up in memory mapping invocations (in conjunction with use of process_madvise()) and a 13% reduction in VMAs on an entirely idle android system with unoptimised code. We expect that with optimisation, and on a loaded system with a larger number of guard pages, the saving could increase significantly, but in any case these numbers are encouraging.

This way, rather than having separate VMAs specifying which parts of a range are guard pages, we instead have a single VMA spanning the entire range of memory a user is permitted to access, including the ranges which are to be 'guarded'. After mapping this, a user can specify which parts of the range should result in a fatal signal when accessed.

By restricting the ability to specify guard pages to memory mapped by existing VMAs, we can rely on the mappings being torn down when the mappings are ultimately unmapped, and everything works simply as if the memory were not faulted in, from the point of view of the containing VMAs.

This mechanism in effect poisons memory ranges similarly to hardware memory poisoning, only it is an entirely software-controlled form of poisoning.

The mechanism is implemented via madvise() behaviour - MADV_GUARD_INSTALL, which installs page table-level guard page markers, and MADV_GUARD_REMOVE, which clears them.

Guard markers can be installed across multiple VMAs, and any existing mappings will be cleared, that is zapped, before installing the guard page markers in the page tables. There is no concept of 'nested' guard markers; multiple attempts to install guard markers in a range will, after the first attempt, have no effect.

Importantly, removing guard markers over a range that contains both guard markers and ordinary backed memory has no effect on anything but the guard markers (including leaving huge pages un-split), so a user can safely remove guard markers over a range of memory, leaving the rest intact.

The actual mechanism by which the page table entries are specified makes use of existing logic - PTE markers, which are used for the userfaultfd UFFDIO_POISON mechanism. Unfortunately PTE_MARKER_POISONED is not suited to the guard page mechanism, as it results in VM_FAULT_HWPOISON semantics in the fault handler, so we add our own specific PTE_MARKER_GUARD and adapt the existing logic to handle it. We also extend the generic page walk mechanism to allow for the installation of PTEs (carefully restricted to memory management logic only, to prevent unwanted abuse).
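As a rough illustration of the intended userland usage, the following is a minimal sketch only (not taken from the series' selftests). It assumes kernel and libc headers new enough to define MADV_GUARD_INSTALL and MADV_GUARD_REMOVE; a real allocator or threading library would apply this per arena or per thread stack rather than in a toy program.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t page = (size_t)sysconf(_SC_PAGESIZE);
	const size_t len = 16 * page;

	/* One VMA covering the whole range, including the future guard page. */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/*
	 * Mark the first page as a guard page. Any existing mapping in that
	 * sub-range is zapped first; accessing it afterwards results in a
	 * fatal signal rather than faulting memory in.
	 */
	if (madvise(buf, page, MADV_GUARD_INSTALL)) {
		perror("madvise(MADV_GUARD_INSTALL)");
		return EXIT_FAILURE;
	}

	/* The rest of the range remains usable as normal. */
	memset(buf + page, 0xaa, len - page);

	/* Remove the marker; the page then behaves as if never faulted in. */
	if (madvise(buf, page, MADV_GUARD_REMOVE)) {
		perror("madvise(MADV_GUARD_REMOVE)");
		return EXIT_FAILURE;
	}

	munmap(buf, len);
	return EXIT_SUCCESS;
}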
We ensure that zapping performed by MADV_DONTNEED and MADV_FREE does not remove guard markers, nor does forking (except when VM_WIPEONFORK is specified for a VMA, which implies a total removal of memory characteristics).

It's important to note that the guard page implementation is emphatically NOT a security feature, so a user can remove the markers if they wish. We simply implement it in such a way as to provide the least surprising behaviour.

An extensive set of self-tests is provided which ensures behaviour is as expected and additionally self-documents the expected behaviour of guard ranges.

This patch (of 5):

The existing generic pagewalk logic permits the walking of page tables, invoking callbacks at individual page table levels via user-provided mm_walk_ops callbacks. This is useful for traversing existing page table entries, but precludes the ability to establish new ones.

Existing mechanisms for performing a walk which also installs page table entries where necessary are heavily duplicated throughout the kernel, each with semantic differences from one another and largely unavailable for use elsewhere. Rather than add yet another implementation, we extend the generic pagewalk logic to enable the installation of page table entries by adding a new install_pte() callback to mm_walk_ops. If this is specified, then upon encountering a missing page table entry, we allocate and install a new one and continue the traversal.

If a THP huge page is encountered at either the PMD or PUD level, we split it only if there is an ops->pte_entry() (or an ops->pmd_entry at PUD level); otherwise, if there is only an ops->install_pte(), we avoid the unnecessary split. We do not support hugetlb at this stage.

If this function returns an error, or an allocation fails during the operation, we abort the operation altogether. It is up to the caller to deal appropriately with partially populated page table ranges.

If install_pte() is defined, the semantics of pte_entry() change - this callback is then only invoked if the entry already exists. This is a useful property, as it allows a caller to handle existing PTEs while installing new ones where necessary in the specified range. If install_pte() is not defined, then there is no functional difference for this patch, so all existing logic will work precisely as it did before.

As we only permit the installation of PTEs where a mapping does not already exist, there is no need for TLB management; however, we do invoke update_mmu_cache() for architectures which require manual maintenance of mappings for other CPUs. We explicitly do not allow the existing page walk API to expose this feature as it is dangerous and intended for internal mm use only. Therefore we provide a new walk_page_range_mm() function, exposed only via mm/internal.h.

We take the opportunity to additionally clean up the page walker logic to be a little easier to follow.
Link: https://lkml.kernel.org/r/cover.1730123433.git.lorenzo.stoakes@oracle.com
Link: https://lkml.kernel.org/r/51b432ebef013e3fdf9f92101533435de1bffadf.1730123433.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Suggested-by: Jann Horn <jannh@google.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Arnd Bergmann <arnd@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Jeff Xu <jeffxu@chromium.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
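For context, here is a simplified, hypothetical sketch of what an install_pte() walker might look like on the mm side. It is not the in-tree MADV_GUARD_INSTALL implementation (which lives in mm/madvise.c, additionally zaps existing mappings and uses further entry callbacks); it assumes walk_page_range_mm() takes the same arguments as walk_page_range(), and that PTE_MARKER_GUARD and make_pte_marker() are available as described above. The callback and function names are illustrative only.

/* Hypothetical sketch, not the in-tree implementation. */
#include <linux/pagewalk.h>
#include <linux/swapops.h>	/* make_pte_marker() */
#include "internal.h"		/* walk_page_range_mm() is mm-internal */

static int guard_install_pte(unsigned long addr, unsigned long next,
			     pte_t *ptep, struct mm_walk *walk)
{
	/*
	 * Invoked only for missing PTEs: the walk allocates intermediate
	 * page tables as needed and hands us the empty slot to fill.
	 */
	set_pte_at(walk->mm, addr, ptep, make_pte_marker(PTE_MARKER_GUARD));
	return 0;
}

static const struct mm_walk_ops guard_install_ops = {
	/* No pte_entry: existing entries are left alone, THPs stay unsplit. */
	.install_pte	= guard_install_pte,
	.walk_lock	= PGWALK_RDLOCK,
};

/* Install guard markers into unpopulated parts of [start, end). */
static int install_guard_markers(struct mm_struct *mm, unsigned long start,
				 unsigned long end)
{
	/* Caller holds mmap_lock for read, matching PGWALK_RDLOCK. */
	return walk_page_range_mm(mm, start, end, &guard_install_ops, NULL);
}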
204 lines
7.4 KiB
C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_PAGEWALK_H
#define _LINUX_PAGEWALK_H

#include <linux/mm.h>

struct mm_walk;

/* Locking requirement during a page walk. */
enum page_walk_lock {
	/* mmap_lock should be locked for read to stabilize the vma tree */
	PGWALK_RDLOCK = 0,
	/* vma will be write-locked during the walk */
	PGWALK_WRLOCK = 1,
	/* vma is expected to be already write-locked during the walk */
	PGWALK_WRLOCK_VERIFY = 2,
};

/**
 * struct mm_walk_ops - callbacks for walk_page_range
 * @pgd_entry:		if set, called for each non-empty PGD (top-level) entry
 * @p4d_entry:		if set, called for each non-empty P4D entry
 * @pud_entry:		if set, called for each non-empty PUD entry
 * @pmd_entry:		if set, called for each non-empty PMD entry
 *			this handler is required to be able to handle
 *			pmd_trans_huge() pmds. They may simply choose to
 *			split_huge_page() instead of handling it explicitly.
 * @pte_entry:		if set, called for each PTE (lowest-level) entry
 *			including empty ones, except if @install_pte is set.
 *			If @install_pte is set, @pte_entry is called only for
 *			existing PTEs.
 * @pte_hole:		if set, called for each hole at all levels,
 *			depth is -1 if not known, 0:PGD, 1:P4D, 2:PUD, 3:PMD.
 *			Any folded depths (where PTRS_PER_P?D is equal to 1)
 *			are skipped. If @install_pte is specified, this will
 *			not trigger for any populated ranges.
 * @hugetlb_entry:	if set, called for each hugetlb entry. This hook
 *			function is called with the vma lock held, in order to
 *			protect against a concurrent freeing of the pte_t* or
 *			the ptl. In some cases, the hook function needs to drop
 *			and retake the vma lock in order to avoid deadlocks
 *			while calling other functions. In such cases the hook
 *			function must either refrain from accessing the pte or
 *			ptl after dropping the vma lock, or else revalidate
 *			those items after re-acquiring the vma lock and before
 *			accessing them.
 * @test_walk:		caller specific callback function to determine whether
 *			we walk over the current vma or not. Returning 0 means
 *			"do page table walk over the current vma", returning
 *			a negative value means "abort current page table walk
 *			right now" and returning 1 means "skip the current vma"
 *			Note that this callback is not called when the caller
 *			passes in a single VMA as for walk_page_vma().
 * @pre_vma:		if set, called before starting walk on a non-null vma.
 * @post_vma:		if set, called after a walk on a non-null vma, provided
 *			that @pre_vma and the vma walk succeeded.
 * @install_pte:	if set, missing page table entries are installed and
 *			thus all levels are always walked in the specified
 *			range. This callback is then invoked at the PTE level
 *			(having split any THP pages prior), providing the PTE to
 *			install. If allocations fail, the walk is aborted. This
 *			operation is only available for userland memory. Not
 *			usable for hugetlb ranges.
 *
 * p?d_entry callbacks are called even if those levels are folded on a
 * particular architecture/configuration.
 */
struct mm_walk_ops {
	int (*pgd_entry)(pgd_t *pgd, unsigned long addr,
			 unsigned long next, struct mm_walk *walk);
	int (*p4d_entry)(p4d_t *p4d, unsigned long addr,
			 unsigned long next, struct mm_walk *walk);
	int (*pud_entry)(pud_t *pud, unsigned long addr,
			 unsigned long next, struct mm_walk *walk);
	int (*pmd_entry)(pmd_t *pmd, unsigned long addr,
			 unsigned long next, struct mm_walk *walk);
	int (*pte_entry)(pte_t *pte, unsigned long addr,
			 unsigned long next, struct mm_walk *walk);
	int (*pte_hole)(unsigned long addr, unsigned long next,
			int depth, struct mm_walk *walk);
	int (*hugetlb_entry)(pte_t *pte, unsigned long hmask,
			     unsigned long addr, unsigned long next,
			     struct mm_walk *walk);
	int (*test_walk)(unsigned long addr, unsigned long next,
			 struct mm_walk *walk);
	int (*pre_vma)(unsigned long start, unsigned long end,
		       struct mm_walk *walk);
	void (*post_vma)(struct mm_walk *walk);
	int (*install_pte)(unsigned long addr, unsigned long next,
			   pte_t *ptep, struct mm_walk *walk);
	enum page_walk_lock walk_lock;
};

/*
 * Action for pud_entry / pmd_entry callbacks.
 * ACTION_SUBTREE is the default
 */
enum page_walk_action {
	/* Descend to next level, splitting huge pages if needed and possible */
	ACTION_SUBTREE = 0,
	/* Continue to next entry at this level (ignoring any subtree) */
	ACTION_CONTINUE = 1,
	/* Call again for this entry */
	ACTION_AGAIN = 2
};

/**
 * struct mm_walk - walk_page_range data
 * @ops:	operation to call during the walk
 * @mm:		mm_struct representing the target process of page table walk
 * @pgd:	pointer to PGD; only valid with no_vma (otherwise set to NULL)
 * @vma:	vma currently walked (NULL if walking outside vmas)
 * @action:	next action to perform (see enum page_walk_action)
 * @no_vma:	walk ignoring vmas (vma will always be NULL)
 * @private:	private data for callbacks' usage
 *
 * (see the comment on walk_page_range() for more details)
 */
struct mm_walk {
	const struct mm_walk_ops *ops;
	struct mm_struct *mm;
	pgd_t *pgd;
	struct vm_area_struct *vma;
	enum page_walk_action action;
	bool no_vma;
	void *private;
};

int walk_page_range(struct mm_struct *mm, unsigned long start,
		unsigned long end, const struct mm_walk_ops *ops,
		void *private);
int walk_page_range_novma(struct mm_struct *mm, unsigned long start,
			  unsigned long end, const struct mm_walk_ops *ops,
			  pgd_t *pgd,
			  void *private);
int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
			unsigned long end, const struct mm_walk_ops *ops,
			void *private);
int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
		void *private);
int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
		      pgoff_t nr, const struct mm_walk_ops *ops,
		      void *private);

typedef int __bitwise folio_walk_flags_t;

/*
 * Walk migration entries as well. Careful: a large folio might get split
 * concurrently.
 */
#define FW_MIGRATION			((__force folio_walk_flags_t)BIT(0))

/* Walk shared zeropages (small + huge) as well. */
#define FW_ZEROPAGE			((__force folio_walk_flags_t)BIT(1))

enum folio_walk_level {
	FW_LEVEL_PTE,
	FW_LEVEL_PMD,
	FW_LEVEL_PUD,
};

/**
 * struct folio_walk - folio_walk_start() / folio_walk_end() data
 * @page:	exact folio page referenced (if applicable)
 * @level:	page table level identifying the entry type
 * @pte:	pointer to the page table entry (FW_LEVEL_PTE).
 * @pmd:	pointer to the page table entry (FW_LEVEL_PMD).
 * @pud:	pointer to the page table entry (FW_LEVEL_PUD).
 * @ptl:	pointer to the page table lock.
 *
 * (see folio_walk_start() documentation for more details)
 */
struct folio_walk {
	/* public */
	struct page *page;
	enum folio_walk_level level;
	union {
		pte_t *ptep;
		pud_t *pudp;
		pmd_t *pmdp;
	};
	union {
		pte_t pte;
		pud_t pud;
		pmd_t pmd;
	};
	/* private */
	struct vm_area_struct *vma;
	spinlock_t *ptl;
};

struct folio *folio_walk_start(struct folio_walk *fw,
		struct vm_area_struct *vma, unsigned long addr,
		folio_walk_flags_t flags);

#define folio_walk_end(__fw, __vma) do { \
	spin_unlock((__fw)->ptl); \
	if (likely((__fw)->level == FW_LEVEL_PTE)) \
		pte_unmap((__fw)->ptep); \
	vma_pgtable_walk_end(__vma); \
} while (0)

#endif /* _LINUX_PAGEWALK_H */
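For contrast with the install_pte() sketch above, a minimal read-only walker built purely from the declarations in this header might look as follows. This is a hedged sketch: the callback and helper names are illustrative, and the caller is assumed to hold the mmap_lock for read, matching PGWALK_RDLOCK.

/* Illustrative sketch only: count present PTEs in [start, end). */
#include <linux/pagewalk.h>
#include <linux/mm.h>

static int count_present_pte(pte_t *pte, unsigned long addr,
			     unsigned long next, struct mm_walk *walk)
{
	unsigned long *nr_present = walk->private;

	/* With no install_pte set, this runs for every PTE slot, empty or not. */
	if (pte_present(ptep_get(pte)))
		(*nr_present)++;
	return 0;
}

static const struct mm_walk_ops count_present_ops = {
	.pte_entry	= count_present_pte,
	.walk_lock	= PGWALK_RDLOCK,
};

static unsigned long count_present(struct mm_struct *mm, unsigned long start,
				   unsigned long end)
{
	unsigned long nr_present = 0;

	/* Caller holds mmap_lock for read, per PGWALK_RDLOCK. */
	walk_page_range(mm, start, end, &count_present_ops, &nr_present);
	return nr_present;
}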
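Similarly, a brief illustration of the folio_walk API declared above. This is a hedged sketch of typical in-kernel usage, not code from this patch: the function name is illustrative, the caller is assumed to stabilize the VMA (e.g. by holding the mmap_lock), and folio_walk_start() is assumed to return with the relevant page table lock held, as the folio_walk_end() macro suggests.

/* Hedged sketch: look up the folio mapped at @addr in @vma, if any. */
#include <linux/pagewalk.h>
#include <linux/mm.h>

static struct folio *peek_mapped_folio(struct vm_area_struct *vma,
				       unsigned long addr)
{
	struct folio_walk fw;
	struct folio *folio;

	folio = folio_walk_start(&fw, vma, addr, 0);
	if (!folio)
		return NULL;

	/*
	 * While the walk is active, fw.ptl is held, so fw.page, fw.level and
	 * the entry copy (fw.pte / fw.pmd / fw.pud) are stable.
	 */
	folio_get(folio);		/* take a reference before unlocking */
	folio_walk_end(&fw, vma);	/* drops fw.ptl, unmaps the PTE */

	return folio;			/* caller is responsible for folio_put() */
}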