===========
Userfaultfd
===========

Objective
=========

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example userfaults allow a proper and more optimal implementation
of the ``PROT_NONE+SIGSEGV`` trick.

Design
======

Userspace creates a new userfaultfd, initializes it, and registers one or more
regions of virtual memory with it. Then, any page faults which occur within the
region(s) result in a message being delivered to the userfaultfd, notifying
userspace of the fault.

The ``userfaultfd`` (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) ``read/POLLIN`` protocol to notify a userland thread of the faults
   happening

2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
   registered in the ``userfaultfd``, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management with mremap/mprotect is that userfaults never involve
heavyweight structures like vmas in any of their operations (in fact the
``userfaultfd`` runtime load never takes the mmap_lock for writing).
Vmas are not suitable for page- (or hugepage-) granular fault tracking
when dealing with virtual address spaces that could span
terabytes. Too many vmas would be needed for that.

The ``userfaultfd``, once created, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware about what is going on
(well of course unless they later try to use the ``userfaultfd``
themselves on the same region the manager is already tracking, which
is a corner case that would currently return ``-EBUSY``).

API
===

Creating a userfaultfd
----------------------

There are two ways to create a new userfaultfd, each of which provides ways to
restrict access to this functionality (since historically userfaultfds which
handle kernel page faults have been a useful tool for exploiting the kernel).

The first way, supported since userfaultfd was introduced, is the
userfaultfd(2) syscall. Access to this is controlled in several ways:

- Any user can always create a userfaultfd which traps userspace page faults
  only. Such a userfaultfd can be created using the userfaultfd(2) syscall
  with the flag UFFD_USER_MODE_ONLY.

- In order to also trap kernel page faults for the address space, either the
  process needs the CAP_SYS_PTRACE capability, or the system must have
  vm.unprivileged_userfaultfd set to 1. By default, vm.unprivileged_userfaultfd
  is set to 0.

The second way, added to the kernel more recently, is by opening
/dev/userfaultfd and issuing a USERFAULTFD_IOC_NEW ioctl to it. This method
yields equivalent userfaultfds to the userfaultfd(2) syscall.

Unlike userfaultfd(2), access to /dev/userfaultfd is controlled via normal
filesystem permissions (user/group/mode), which gives fine-grained access to
userfaultfd specifically, without also granting other unrelated privileges at
the same time (as e.g. granting CAP_SYS_PTRACE would do). Users who have access
to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
vm.unprivileged_userfaultfd is not considered.

Initializing a userfaultfd
--------------------------

When first opened the ``userfaultfd`` must be enabled by invoking the
``UFFDIO_API`` ioctl with a ``uffdio_api.api`` value set to ``UFFD_API`` (or
a later API version), which specifies the ``read/POLLIN`` protocol
userland intends to speak on the ``UFFD`` and the ``uffdio_api.features``
userland requires. The ``UFFDIO_API`` ioctl, if successful (i.e. if the
requested ``uffdio_api.api`` is also spoken by the running kernel and the
requested features are going to be enabled), will return in
``uffdio_api.features`` and ``uffdio_api.ioctls`` two 64bit bitmasks of,
respectively, all the available features of the read(2) protocol and
the generic ioctls available.

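A minimal handshake might look like the following sketch
(``uffd_api_init`` is a hypothetical helper, not a kernel-provided
function):

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/*
 * Enable the userfaultfd, requesting the given feature bits. Returns 0
 * on success; *features then holds the bits the kernel actually enabled
 * and *ioctls the generic ioctls it supports.
 */
static int uffd_api_init(int uffd, __u64 wanted, __u64 *features, __u64 *ioctls)
{
	struct uffdio_api api;

	memset(&api, 0, sizeof(api));
	api.api = UFFD_API;
	api.features = wanted;
	if (ioctl(uffd, UFFDIO_API, &api))
		return -1;
	*features = api.features;
	*ioctls = api.ioctls;
	return 0;
}
```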
The ``uffdio_api.features`` bitmask returned by the ``UFFDIO_API`` ioctl
defines what memory types are supported by the ``userfaultfd`` and what
events, except page fault notifications, may be generated:

- The ``UFFD_FEATURE_EVENT_*`` flags indicate that various events other
  than page faults are supported. These events are described in more
  detail below in the `Non-cooperative userfaultfd`_ section.

- ``UFFD_FEATURE_MISSING_HUGETLBFS`` and ``UFFD_FEATURE_MISSING_SHMEM``
  indicate that the kernel supports ``UFFDIO_REGISTER_MODE_MISSING``
  registrations for hugetlbfs and shared memory (covering all shmem APIs,
  i.e. tmpfs, ``IPCSHM``, ``/dev/zero``, ``MAP_SHARED``, ``memfd_create``,
  etc) virtual memory areas, respectively.

- ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
  ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
  support for shmem virtual memory areas.

- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving
  existing page contents from userspace.

The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be
enabled if supported.

Once the ``userfaultfd`` API has been enabled the ``UFFDIO_REGISTER``
ioctl should be invoked (if present in the returned ``uffdio_api.ioctls``
bitmask) to register a memory range in the ``userfaultfd`` by setting the
uffdio_register structure accordingly. The ``uffdio_register.mode``
bitmask will specify to the kernel which kind of faults to track for
the range. The ``UFFDIO_REGISTER`` ioctl will return the
``uffdio_register.ioctls`` bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types (e.g. anonymous memory vs. shmem vs.
hugetlbfs), or all types of intercepted faults.

Userland can use the ``uffdio_register.ioctls`` to manage the virtual
address space in the background (to add or potentially also remove
memory from the ``userfaultfd`` registered range). This means a userfault
could be triggered just before userland maps the user-faulted page in
the background.

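For instance, registering a range for missing-fault tracking could be
sketched as follows (``uffd_register_missing`` is a hypothetical helper
name):

```c
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Register [addr, addr + len) for missing-page fault tracking. */
static int uffd_register_missing(int uffd, unsigned long addr,
				 unsigned long len)
{
	struct uffdio_register reg = {
		.range = { .start = addr, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};

	if (ioctl(uffd, UFFDIO_REGISTER, &reg))
		return -1;
	/* reg.ioctls now lists the resolution ioctls valid for this
	 * range, e.g. the _UFFDIO_COPY bit when UFFDIO_COPY is usable. */
	return 0;
}
```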
Resolving Userfaults
--------------------

There are three basic ways to resolve userfaults:

- ``UFFDIO_COPY`` atomically copies some existing page contents from
  userspace.

- ``UFFDIO_ZEROPAGE`` atomically zeros the new page.

- ``UFFDIO_CONTINUE`` maps an existing, previously-populated page.

These operations are atomic in the sense that they guarantee nothing can
see a half-populated page, since readers will keep userfaulting until the
operation has finished.

By default, these wake up userfaults blocked on the range in question.
They support a ``UFFDIO_*_MODE_DONTWAKE`` ``mode`` flag, which indicates
that waking will be done separately at some later time.

Which ioctl to choose depends on the kind of page fault, and what we'd
like to do to resolve it:

- For ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be
  resolved by either providing a new page (``UFFDIO_COPY``), or mapping
  the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map
  the zero page for a missing fault. With userfaultfd, userspace can
  decide what content to provide before the faulting thread continues.

- For ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in
  the page cache). Userspace has the option of modifying the page's
  contents before resolving the fault. Once the contents are correct
  (modified or not), userspace asks the kernel to map the page and let the
  faulting thread continue with ``UFFDIO_CONTINUE``.

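Resolving a missing fault with ``UFFDIO_COPY`` might look like this
sketch (the helper name is illustrative):

```c
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/*
 * Atomically install a copy of the page(s) at src into the faulting
 * range at dst, then wake any threads blocked on it.
 */
static long uffd_copy(int uffd, unsigned long dst, unsigned long src,
		      unsigned long len)
{
	struct uffdio_copy copy = {
		.dst = dst,
		.src = src,
		.len = len,
		.mode = 0,	/* or UFFDIO_COPY_MODE_DONTWAKE to wake later */
	};

	if (ioctl(uffd, UFFDIO_COPY, &copy))
		return -1;
	return copy.copy;	/* bytes actually copied */
}
```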
Notes:

- You can tell which kind of fault occurred by examining
  ``pagefault.flags`` within the ``uffd_msg``, checking for the
  ``UFFD_PAGEFAULT_FLAG_*`` flags.

- None of the page-delivering ioctls default to the range that you
  registered with. You must fill in all fields for the appropriate
  ioctl struct including the range.

- You get the address of the access that triggered the missing page
  event out of a struct uffd_msg that you read in the thread from the
  uffd. You can supply as many pages as you want with these IOCTLs.
  Keep in mind that unless you used DONTWAKE then the first of any of
  those IOCTLs wakes up the faulting thread.

- Be sure to test for all errors including
  (``pollfd[0].revents & POLLERR``). This can happen, e.g. when ranges
  supplied were incorrect.

Write Protect Notifications
---------------------------

This is equivalent to (but faster than) using mprotect and a SIGSEGV
signal handler.

Firstly you need to register a range with ``UFFDIO_REGISTER_MODE_WP``.
Instead of using mprotect(2) you use
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``
while ``mode = UFFDIO_WRITEPROTECT_MODE_WP``
in the struct passed in. The range does not default to and does not
have to be identical to the range you registered with. You can write
protect as many ranges as you like (inside the registered range).

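Toggling write protection on a sub-range could be sketched as follows
(``uffd_wp_range`` is a hypothetical helper):

```c
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Write-protect (protect != 0) or un-write-protect a sub-range. */
static int uffd_wp_range(int uffd, unsigned long addr, unsigned long len,
			 int protect)
{
	struct uffdio_writeprotect wp = {
		.range = { .start = addr, .len = len },
		.mode = protect ? UFFDIO_WRITEPROTECT_MODE_WP : 0,
	};

	return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp) ? -1 : 0;
}
```

Removing the protection (``protect == 0``) also wakes any writer blocked
on the range, unless ``UFFDIO_WRITEPROTECT_MODE_DONTWAKE`` is added.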
Then, in the thread reading from uffd the struct will have
``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP`` set. Now you send
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct *uffdio_writeprotect)``
again while ``pagefault.mode`` does not have ``UFFDIO_WRITEPROTECT_MODE_WP``
set. This wakes up the thread which will continue to run with writes. This
allows you to do the bookkeeping about the write in the uffd reading
thread before the ioctl.

If you registered with both ``UFFDIO_REGISTER_MODE_MISSING`` and
``UFFDIO_REGISTER_MODE_WP`` then you need to think about the sequence in
which you supply a page and undo write protect. Note that there is a
difference between writes into a WP area and into a !WP area. The
former will have ``UFFD_PAGEFAULT_FLAG_WP`` set, the latter
``UFFD_PAGEFAULT_FLAG_WRITE``. The latter did not fail on protection but
you still need to supply a page when ``UFFDIO_REGISTER_MODE_MISSING`` was
used.

Userfaultfd write-protect mode currently behaves differently on none ptes
(when e.g. a page is missing) over different types of memories.

For anonymous memory, ``ioctl(UFFDIO_WRITEPROTECT)`` will ignore none ptes
(e.g. when pages are missing and not populated). For file-backed memories
like shmem and hugetlbfs, none ptes will be write protected just like a
present pte. In other words, there will be a userfaultfd write fault
message generated when writing to a missing page on file typed memories,
as long as the page range was write-protected before. Such a message will
not be generated on anonymous memories by default.

If the application wants to be able to write protect none ptes on anonymous
memory, one can pre-populate the memory with e.g. MADV_POPULATE_READ. On
newer kernels, one can also detect the feature UFFD_FEATURE_WP_UNPOPULATED
and set the feature bit in advance to make sure none ptes will also be
write protected even upon anonymous memory.

When using ``UFFDIO_REGISTER_MODE_WP`` in combination with either
``UFFDIO_REGISTER_MODE_MISSING`` or ``UFFDIO_REGISTER_MODE_MINOR``, when
resolving missing / minor faults with ``UFFDIO_COPY`` or ``UFFDIO_CONTINUE``
respectively, it may be desirable for the new page / mapping to be
write-protected (so future writes will also result in a WP fault). These
ioctls support a mode flag (``UFFDIO_COPY_MODE_WP`` or
``UFFDIO_CONTINUE_MODE_WP`` respectively) to configure the mapping this way.

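As a sketch, resolving a missing fault while keeping the new page
write-protected could look like this (illustrative helper; the fallback
define covers uapi headers that predate uffd-wp):

```c
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Guard for older uapi headers. */
#ifndef UFFDIO_COPY_MODE_WP
#define UFFDIO_COPY_MODE_WP ((__u64)1 << 1)
#endif

/* Install a page copy that is immediately write-protected, so the next
 * write to it raises a UFFD_PAGEFAULT_FLAG_WP fault. */
static int uffd_copy_wp(int uffd, unsigned long dst, unsigned long src,
			unsigned long len)
{
	struct uffdio_copy copy = {
		.dst = dst,
		.src = src,
		.len = len,
		.mode = UFFDIO_COPY_MODE_WP,
	};

	return ioctl(uffd, UFFDIO_COPY, &copy) ? -1 : 0;
}
```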
probe.
vma_can_userfault() now allows any VMA if the userfaultfd registration is
only about async uffd-wp. So we can track dirty for all kinds of memory
including generic file systems (like XFS, EXT4 or BTRFS).
One trick worth mention in do_wp_page() is that we need to manually update
vmf->orig_pte here because it can be used later with a pte_same() check -
this path always has FAULT_FLAG_ORIG_PTE_VALID set in the flags.
The major defect of this approach of dirty tracking is we need to populate
the pgtables when tracking starts. Soft-dirty doesn't do it like that.
It's unwanted in the case where the range of memory to track is huge and
unpopulated (e.g., tracking updates on a 10G file with mmap() on top,
without having any page cache installed yet). One way to improve this is
to allow pte markers exist for larger than PTE level for PMD+. That will
not change the interface if to implemented, so we can leave that for
later.
Link: https://lkml.kernel.org/r/20230821141518.870589-1-usama.anjum@collabora.com
Link: https://lkml.kernel.org/r/20230821141518.870589-2-usama.anjum@collabora.com
Signed-off-by: Peter Xu <peterx@redhat.com>
Co-developed-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
Cc: Alex Sierra <alex.sierra@amd.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andrei Vagin <avagin@gmail.com>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Miroslaw <emmir@google.com>
Cc: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Paul Gofman <pgofman@codeweavers.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yun Zhou <yun.zhou@windriver.com>
Cc: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-08-21 14:15:13 +00:00
|
|
|
If the userfaultfd context has the ``UFFD_FEATURE_WP_ASYNC`` feature bit
set, any vma registered with write-protection will work in async mode
rather than the default sync mode.

In async mode, no message is generated when a write operation happens;
instead, the write-protection is resolved automatically by the kernel.
It can be seen as a more accurate version of soft-dirty tracking, and it
differs from soft-dirty in a few ways:

- The dirty result will not be affected by vma changes (e.g. vma
  merging) because the dirty status is only tracked in the pte.

- It supports range operations by default, so one can enable tracking on
  any range of memory as long as it is page aligned.

- Dirty information will not get lost if the pte was zapped for
  various reasons (e.g. during split of a shmem transparent huge page).

- Due to the reversed meaning compared to soft-dirty (a page is clean
  when the uffd-wp bit is set, and dirty when the uffd-wp bit is
  cleared), it has different semantics on some memory operations. For
  example: ``MADV_DONTNEED`` on anonymous memory (or ``MADV_REMOVE`` on
  a file mapping) will be treated as dirtying the memory, because the
  uffd-wp bit is dropped during the operation.

The user application can collect the "written/dirty" status by looking
up the uffd-wp bit for the pages of interest in ``/proc/<pid>/pagemap``.

A page will not be tracked by uffd-wp async mode until it is explicitly
write-protected by ``ioctl(UFFDIO_WRITEPROTECT)`` with the mode flag
``UFFDIO_WRITEPROTECT_MODE_WP`` set. Trying to resolve a page fault that
was tracked by async mode userfaultfd-wp is invalid.

When userfaultfd-wp async mode is used alone, it can be applied to all
kinds of memory.

Memory Poisoning Emulation
--------------------------

In response to a fault (either missing or minor), an action userspace can
take to "resolve" it is to issue a ``UFFDIO_POISON``. This will cause any
future faulters to either get a SIGBUS, or in KVM's case the guest will
receive an MCE as if there were hardware memory poisoning.

This is used to emulate hardware memory poisoning. Imagine a VM running on a
machine which experiences a real hardware memory error. Later, we live migrate
the VM to another physical machine. Since we want the migration to be
transparent to the guest, we want that same address range to act as if it was
still poisoned, even though it's on a new physical host which ostensibly
doesn't have a memory error in the exact same spot.

QEMU/KVM
========

QEMU/KVM uses the ``userfaultfd`` syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
``userfaultfd`` abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, ``FOLL_NOWAIT`` and all other ``GUP*`` features
work just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket, but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease ``/proc/sys/net/ipv4/tcp_wmem``).

The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs ``UFFDIO_COPY|ZEROPAGE``
ioctls on the ``userfaultfd`` in order to map the received pages into the
guest (``UFFDIO_ZEROPAGE`` is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the ``userfaultfd`` in parallel. When a ``POLLIN`` event is
generated after a userfault triggers, the postcopy thread read()s from
the ``userfaultfd`` and receives the fault address (or ``-EAGAIN`` in
case the userfault was already resolved and woken by a
``UFFDIO_COPY|ZEROPAGE`` run by the parallel QEMU migration thread).

After the QEMU postcopy thread (running in the destination node) gets
the userfault address, it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with the ``UFFDIO_COPY|ZEROPAGE`` (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around, and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin, and we seek
over it when receiving incoming userfaults. After sending each page, of
course, the bitmap is updated accordingly. The bitmap is also useful to
avoid sending the same page twice (in case the userfault is read by the
postcopy thread just before ``UFFDIO_COPY|ZEROPAGE`` runs in the
migration thread).

Non-cooperative userfaultfd
===========================

When the ``userfaultfd`` is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting appropriate
bits in ``uffdio_api.features`` passed to the ``UFFDIO_API`` ioctl:

``UFFD_FEATURE_EVENT_FORK``
        enable ``userfaultfd`` hooks for fork(). When this feature is
        enabled, the ``userfaultfd`` context of the parent process is
        duplicated into the newly created process. The manager
        receives ``UFFD_EVENT_FORK`` with the file descriptor of the new
        ``userfaultfd`` context in ``uffd_msg.fork``.

``UFFD_FEATURE_EVENT_REMAP``
        enable notifications about mremap() calls. When the
        non-cooperative process moves a virtual memory area to a
        different location, the manager will receive
        ``UFFD_EVENT_REMAP``. The ``uffd_msg.remap`` will contain the
        old and new addresses of the area and its original length.

``UFFD_FEATURE_EVENT_REMOVE``
        enable notifications about madvise(MADV_REMOVE) and
        madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE``
        will be generated upon these calls to madvise(). The
        ``uffd_msg.remove`` will contain the start and end addresses of
        the removed area.

``UFFD_FEATURE_EVENT_UNMAP``
        enable notifications about memory unmapping. The manager will
        get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing the
        start and end addresses of the unmapped area.

Although ``UFFD_FEATURE_EVENT_REMOVE`` and ``UFFD_FEATURE_EVENT_UNMAP``
are pretty similar, they differ considerably in the action expected from
the ``userfaultfd`` manager. In the former case, the virtual memory is
removed, but the area is not: the area remains monitored by the
``userfaultfd``, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such a page fault is
to zeromap the faulting address. However, in the latter case, when an
area is unmapped, either explicitly (with the munmap() system call) or
implicitly (e.g. during mremap()), the area is removed and in turn the
``userfaultfd`` context for that area disappears too, and the manager
will not get further userland page faults from the removed area. Still,
the notification is required in order to prevent the manager from using
``UFFDIO_COPY`` on the unmapped area.

Unlike userland page faults, which have to be synchronous and require
an explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The ``userfaultfd`` manager should
carefully synchronize calls to ``UFFDIO_COPY`` with the event
processing. To aid the synchronization, the ``UFFDIO_COPY`` ioctl will
return ``-ENOSPC`` when the monitored process exits at the time of
``UFFDIO_COPY``, and ``-ENOENT`` when the non-cooperative process has
changed its virtual memory layout simultaneously with an outstanding
``UFFDIO_COPY`` operation.

The current asynchronous model of the event delivery is optimal for
single threaded non-cooperative ``userfaultfd`` manager implementations. A
synchronous event delivery model can be added later as a new
``userfaultfd`` feature to facilitate multithreading enhancements of the
non-cooperative manager, for example to allow ``UFFDIO_COPY`` ioctls to
run in parallel to the event reception. Single threaded
implementations should continue to use the current async event
delivery model instead.