/* SPDX-License-Identifier: GPL-2.0-or-later */
/* fs/ internal definitions
 *
 * Copyright (C) 2006 Red Hat, Inc. All Rights Reserved.
 * Written by David Howells (dhowells@redhat.com)
 */

struct super_block;
struct file_system_type;
struct iomap;
struct iomap_ops;
struct linux_binprm;
struct path;
struct mount;
struct shrink_control;
struct fs_context;
struct pipe_inode_info;
struct iov_iter;
struct mnt_idmap;
struct ns_common;

/*
 * block/bdev.c
 */
#ifdef CONFIG_BLOCK
extern void __init bdev_cache_init(void);
#else
static inline void bdev_cache_init(void)
{
}
#endif /* CONFIG_BLOCK */
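
/*
 * The empty !CONFIG_BLOCK stub above lets early init code call
 * bdev_cache_init() unconditionally. A sketch of such a caller, modelled
 * on vfs_caches_init() in fs/dcache.c (abridged):
 *
 *        void __init vfs_caches_init(void)
 *        {
 *                ...
 *                bdev_cache_init();
 *                chrdev_init();
 *        }
 */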

/*
 * buffer.c
 */
int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
                get_block_t *get_block, const struct iomap *iomap);

/*
 * char_dev.c
 */
extern void __init chrdev_init(void);

/*
 * fs_context.c
 */
extern const struct fs_context_operations legacy_fs_context_ops;
extern int parse_monolithic_mount_data(struct fs_context *, void *);
extern void vfs_clean_context(struct fs_context *fc);
extern int finish_clean_context(struct fs_context *fc);
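
/*
 * A sketch of how monolithic mount(2) option data reaches these helpers,
 * modelled on do_new_mount() in fs/namespace.c (simplified, error
 * handling elided):
 *
 *        fc = fs_context_for_mount(type, sb_flags);
 *        parse_monolithic_mount_data(fc, data);
 *        err = vfs_get_tree(fc);
 *        ...
 *        put_fs_context(fc);
 */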

/*
 * namei.c
 */
extern int filename_lookup(int dfd, struct filename *name, unsigned flags,
                           struct path *path, struct path *root);
int do_rmdir(int dfd, struct filename *name);
int do_unlinkat(int dfd, struct filename *name);
int may_linkat(struct mnt_idmap *idmap, const struct path *link);
int do_renameat2(int olddfd, struct filename *oldname, int newdfd,
                 struct filename *newname, unsigned int flags);
int do_mkdirat(int dfd, struct filename *name, umode_t mode);
int do_symlinkat(struct filename *from, int newdfd, struct filename *to);
int do_linkat(int olddfd, struct filename *old, int newdfd,
              struct filename *new, int flags);
int vfs_tmpfile(struct mnt_idmap *idmap,
                const struct path *parentpath,
                struct file *file, umode_t mode);
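
/*
 * These do_*() helpers carry the bodies of the corresponding syscalls so
 * that in-kernel callers can reuse them without going through the syscall
 * table. A sketch of the wrapper pattern, modelled on the renameat2 entry
 * point in fs/namei.c (simplified):
 *
 *        SYSCALL_DEFINE5(renameat2, int, olddfd, const char __user *, oldname,
 *                        int, newdfd, const char __user *, newname,
 *                        unsigned int, flags)
 *        {
 *                return do_renameat2(olddfd, getname(oldname), newdfd,
 *                                    getname(newname), flags);
 *        }
 */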

/*
 * namespace.c
 */
extern struct vfsmount *lookup_mnt(const struct path *);
extern int finish_automount(struct vfsmount *, const struct path *);

extern int sb_prepare_remount_readonly(struct super_block *);

extern void __init mnt_init(void);

int mnt_get_write_access_file(struct file *file);
void mnt_put_write_access_file(struct file *file);

extern void dissolve_on_fput(struct vfsmount *);
extern bool may_mount(void);

int path_mount(const char *dev_name, struct path *path,
                const char *type_page, unsigned long flags, void *data_page);
int path_umount(struct path *path, int flags);

int show_path(struct seq_file *m, struct dentry *root);

/*
 * fs_struct.c
 */
extern void chroot_fs_refs(const struct path *, const struct path *);

/*
 * file_table.c
 */
struct file *alloc_empty_file(int flags, const struct cred *cred);
struct file *alloc_empty_file_noaccount(int flags, const struct cred *cred);
struct file *alloc_empty_backing_file(int flags, const struct cred *cred);

static inline void file_put_write_access(struct file *file)
{
        put_write_access(file->f_inode);
        mnt_put_write_access(file->f_path.mnt);
        if (unlikely(file->f_mode & FMODE_BACKING))
                mnt_put_write_access(backing_file_user_path(file)->mnt);
}

static inline void put_file_access(struct file *file)
{
        if ((file->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ) {
                i_readcount_dec(file->f_inode);
        } else if (file->f_mode & FMODE_WRITER) {
                file_put_write_access(file);
        }
}
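
/*
 * put_file_access() undoes the access accounting taken when the file was
 * opened: a read-only open holds an i_readcount reference, while a
 * FMODE_WRITER open holds write access on both the inode and the mount.
 * A sketch of the acquire side, simplified from do_dentry_open():
 *
 *        if ((f->f_mode & (FMODE_READ | FMODE_WRITE)) == FMODE_READ) {
 *                i_readcount_inc(inode);
 *        } else if (f->f_mode & FMODE_WRITE && !special_file(inode->i_mode)) {
 *                get_write_access(inode);
 *                mnt_get_write_access(f->f_path.mnt);
 *                f->f_mode |= FMODE_WRITER;
 *        }
 */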

/*
 * super.c
 */
extern int reconfigure_super(struct fs_context *);
extern bool super_trylock_shared(struct super_block *sb);
struct super_block *user_get_super(dev_t, bool excl);
void put_super(struct super_block *sb);
extern bool mount_capable(struct fs_context *);
int sb_init_dio_done_wq(struct super_block *sb);

/*
 * Prepare superblock for changing its read-only state (i.e., either remount
 * read-write superblock read-only or vice versa). After this function returns,
 * mnt_is_readonly() will return true for any mount of the superblock if its
 * caller is able to observe any changes done by the remount. This holds until
 * sb_end_ro_state_change() is called.
 */
static inline void sb_start_ro_state_change(struct super_block *sb)
{
        WRITE_ONCE(sb->s_readonly_remount, 1);
        /*
         * For RO->RW transition, the barrier pairs with the barrier in
         * mnt_is_readonly() making sure if mnt_is_readonly() sees SB_RDONLY
         * cleared, it will see s_readonly_remount set.
         * For RW->RO transition, the barrier pairs with the barrier in
         * mnt_get_write_access() before the mnt_is_readonly() check.
         * The barrier makes sure if mnt_get_write_access() sees MNT_WRITE_HOLD
         * already cleared, it will see s_readonly_remount set.
         */
        smp_wmb();
}
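
/*
 * The read side this pairs with is mnt_is_readonly() in fs/namespace.c,
 * roughly (sketch):
 *
 *        static inline int mnt_is_readonly(struct vfsmount *mnt)
 *        {
 *                if (READ_ONCE(mnt->mnt_sb->s_readonly_remount))
 *                        return 1;
 *                smp_rmb();      // pairs with the smp_wmb() above
 *                return __mnt_is_readonly(mnt);
 *        }
 */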

/*
 * Ends the section changing the read-only state of the superblock. After this
 * function returns, if mnt_is_readonly() returns false, the caller will be
 * able to observe all the changes remount did to the superblock.
 */
static inline void sb_end_ro_state_change(struct super_block *sb)
{
        /*
         * This barrier provides release semantics that pairs with
         * the smp_rmb() acquire semantics in mnt_is_readonly().
         * This barrier pair ensures that when mnt_is_readonly() sees
         * 0 for sb->s_readonly_remount, it will also see all the
         * preceding flag changes that were made during the RO state
         * change.
         */
        smp_wmb();
        WRITE_ONCE(sb->s_readonly_remount, 0);
}
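
/*
 * Typical use in the remount path (sketch; cf. reconfigure_super() and
 * sb_prepare_remount_readonly()):
 *
 *        sb_start_ro_state_change(sb);
 *        ...flip SB_RDONLY and the per-mount read-only state...
 *        sb_end_ro_state_change(sb);
 */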

/*
 * open.c
 */
struct open_flags {
        int open_flag;
        umode_t mode;
        int acc_mode;
        int intent;
        int lookup_flags;
};
extern struct file *do_filp_open(int dfd, struct filename *pathname,
                const struct open_flags *op);
extern struct file *do_file_open_root(const struct path *,
                const char *, const struct open_flags *);
extern struct open_how build_open_how(int flags, umode_t mode);
extern int build_open_flags(const struct open_how *how, struct open_flags *op);
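
/*
 * A sketch of how openat(2) strings these together, simplified from
 * do_sys_openat2() (error handling and fd installation elided):
 *
 *        struct open_how how = build_open_how(flags, mode);
 *        struct open_flags op;
 *        int err = build_open_flags(&how, &op);
 *        struct filename *name = getname(filename);
 *        struct file *f = do_filp_open(dfd, name, &op);
 */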

struct file *file_close_fd_locked(struct files_struct *files, unsigned fd);

long do_ftruncate(struct file *file, loff_t length, int small);
long do_sys_ftruncate(unsigned int fd, loff_t length, int small);
int chmod_common(const struct path *path, umode_t mode);
int do_fchownat(int dfd, const char __user *filename, uid_t user, gid_t group,
                int flag);
int chown_common(const struct path *path, uid_t user, gid_t group);
extern int vfs_open(const struct path *, struct file *);

/*
 * inode.c
 */
extern long prune_icache_sb(struct super_block *sb, struct shrink_control *sc);
int dentry_needs_remove_privs(struct mnt_idmap *, struct dentry *dentry);
bool in_group_or_capable(struct mnt_idmap *idmap,
                         const struct inode *inode, vfsgid_t vfsgid);

/*
 * fs-writeback.c
 */
extern long get_nr_dirty_inodes(void);
void invalidate_inodes(struct super_block *sb);

/*
 * dcache.c
 */
extern int d_set_mounted(struct dentry *dentry);
extern long prune_dcache_sb(struct super_block *sb, struct shrink_control *sc);
extern struct dentry *d_alloc_cursor(struct dentry *);
extern struct dentry *d_alloc_pseudo(struct super_block *, const struct qstr *);
extern char *simple_dname(struct dentry *, char *, int);
extern void dput_to_list(struct dentry *, struct list_head *);
extern void shrink_dentry_list(struct list_head *);
extern void shrink_dcache_for_umount(struct super_block *);
extern struct dentry *__d_lookup(const struct dentry *, const struct qstr *);
extern struct dentry *__d_lookup_rcu(const struct dentry *parent,
                                const struct qstr *name, unsigned *seq);
extern void d_genocide(struct dentry *);

/*
 * pipe.c
 */
extern const struct file_operations pipefifo_fops;

/*
 * fs_pin.c
 */
extern void group_pin_kill(struct hlist_head *p);
extern void mnt_pin_kill(struct mount *m);

/*
 * fs/nsfs.c
 */
extern const struct dentry_operations ns_dentry_operations;
int open_namespace(struct ns_common *ns);

/*
 * fs/stat.c:
 */

int getname_statx_lookup_flags(int flags);
int do_statx(int dfd, struct filename *filename, unsigned int flags,
             unsigned int mask, struct statx __user *buffer);
int do_statx_fd(int fd, unsigned int flags, unsigned int mask,
                struct statx __user *buffer);

/*
 * fs/splice.c:
 */
ssize_t splice_file_to_pipe(struct file *in,
                            struct pipe_inode_info *opipe,
                            loff_t *offset,
                            size_t len, unsigned int flags);

/*
 * fs/xattr.c:
 */
struct xattr_name {
        char name[XATTR_NAME_MAX + 1];
};

struct kernel_xattr_ctx {
        /* Value of attribute */
        union {
                const void __user *cvalue;
                void __user *value;
        };
        void *kvalue;
        size_t size;
        /* Attribute name */
        struct xattr_name *kname;
        unsigned int flags;
};

ssize_t do_getxattr(struct mnt_idmap *idmap,
                    struct dentry *d,
                    struct kernel_xattr_ctx *ctx);

int setxattr_copy(const char __user *name, struct kernel_xattr_ctx *ctx);
int do_setxattr(struct mnt_idmap *idmap, struct dentry *dentry,
                struct kernel_xattr_ctx *ctx);
int may_write_xattr(struct mnt_idmap *idmap, struct inode *inode);
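
/*
 * A sketch of the setxattr(2) flow through these helpers, simplified from
 * fs/xattr.c (mount write access and error handling elided):
 *
 *        struct xattr_name kname;
 *        struct kernel_xattr_ctx ctx = {
 *                .cvalue = value,
 *                .size   = size,
 *                .kname  = &kname,
 *                .flags  = flags,
 *        };
 *        error = setxattr_copy(name, &ctx);      // copy in name and value
 *        error = do_setxattr(idmap, dentry, &ctx);
 *        kvfree(ctx.kvalue);
 */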

#ifdef CONFIG_FS_POSIX_ACL
int do_set_acl(struct mnt_idmap *idmap, struct dentry *dentry,
               const char *acl_name, const void *kvalue, size_t size);
ssize_t do_get_acl(struct mnt_idmap *idmap, struct dentry *dentry,
                   const char *acl_name, void *kvalue, size_t size);
#else
static inline int do_set_acl(struct mnt_idmap *idmap,
                             struct dentry *dentry, const char *acl_name,
                             const void *kvalue, size_t size)
{
        return -EOPNOTSUPP;
}
static inline ssize_t do_get_acl(struct mnt_idmap *idmap,
                                 struct dentry *dentry, const char *acl_name,
                                 void *kvalue, size_t size)
{
        return -EOPNOTSUPP;
}
#endif

ssize_t __kernel_write_iter(struct file *file, struct iov_iter *from, loff_t *pos);

/*
 * fs/attr.c
 */
struct mnt_idmap *alloc_mnt_idmap(struct user_namespace *mnt_userns);
struct mnt_idmap *mnt_idmap_get(struct mnt_idmap *idmap);
void mnt_idmap_put(struct mnt_idmap *idmap);

struct stashed_operations {
        void (*put_data)(void *data);
        int (*init_inode)(struct inode *inode, void *data);
};
int path_from_stashed(struct dentry **stashed, struct vfsmount *mnt, void *data,
                      struct path *path);
void stashed_dentry_prune(struct dentry *dentry);

/**
 * path_mounted - check whether path is mounted
 * @path: path to check
 *
 * Determine whether @path refers to the root of a mount.
 *
 * Return: true if @path is the root of a mount, false if not.
 */
static inline bool path_mounted(const struct path *path)
{
        return path->mnt->mnt_root == path->dentry;
}
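
/*
 * Callers in fs/namespace.c use this to refuse operating on a path that
 * is not the root of a mount, e.g.:
 *
 *        if (!path_mounted(path))
 *                return -EINVAL;
 */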

void file_f_owner_release(struct file *file);