Landlock updates for v5.19-rc1

Important changes:
 * improve the path_rename LSM hook implementations for RENAME_EXCHANGE;
 * fix a too-restrictive filesystem control for a rare corner case;
 * set the nested sandbox limitation to 16 layers;
 * add a new LANDLOCK_ACCESS_FS_REFER access right to properly handle
   file reparenting (i.e. full rename and link support);
 * add new tests and documentation;
 * format code with clang-format to make it easier to maintain and
   contribute.
 
 Related patch series:
 * [PATCH v1 0/7] Landlock: Clean up coding style with clang-format
   https://lore.kernel.org/r/20220506160513.523257-1-mic@digikod.net
 * [PATCH v2 00/10] Minor Landlock fixes and new tests
   https://lore.kernel.org/r/20220506160820.524344-1-mic@digikod.net
 * [PATCH v3 00/12] Landlock: file linking and renaming support
   https://lore.kernel.org/r/20220506161102.525323-1-mic@digikod.net
 * [PATCH v2] landlock: Explain how to support Landlock
   https://lore.kernel.org/r/20220513112743.156414-1-mic@digikod.net
 -----BEGIN PGP SIGNATURE-----
 
 iIYEABYIAC4WIQSVyBthFV4iTW/VU1/l49DojIL20gUCYousmBAcbWljQGRpZ2lr
 b2QubmV0AAoJEOXj0OiMgvbSWToA/32m9xJhfppiTBHqw6Dt47v4sjuE/3ScwO/O
 40rzaqs3AQD8AWHeqvPuM2lwPp1NQS4mcfv7K3DSCGBbUjHqdcl3Aw==
 =+tJO
 -----END PGP SIGNATURE-----

Merge tag 'landlock-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux

Pull Landlock updates from Mickaël Salaün:

 - improve the path_rename LSM hook implementations for RENAME_EXCHANGE;

 - fix a too-restrictive filesystem control for a rare corner case;

 - set the nested sandbox limitation to 16 layers;

 - add a new LANDLOCK_ACCESS_FS_REFER access right to properly handle
   file reparenting (i.e. full rename and link support);

 - add new tests and documentation;

 - format code with clang-format to make it easier to maintain and
   contribute.

* tag 'landlock-5.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/mic/linux: (30 commits)
  landlock: Explain how to support Landlock
  landlock: Add design choices documentation for filesystem access rights
  landlock: Document good practices about filesystem policies
  landlock: Document LANDLOCK_ACCESS_FS_REFER and ABI versioning
  samples/landlock: Add support for file reparenting
  selftests/landlock: Add 11 new test suites dedicated to file reparenting
  landlock: Add support for file reparenting with LANDLOCK_ACCESS_FS_REFER
  LSM: Remove double path_rename hook calls for RENAME_EXCHANGE
  landlock: Move filesystem helpers and add a new one
  landlock: Fix same-layer rule unions
  landlock: Create find_rule() from unmask_layers()
  landlock: Reduce the maximum number of layers to 16
  landlock: Define access_mask_t to enforce a consistent access mask size
  selftests/landlock: Test landlock_create_ruleset(2) argument check ordering
  landlock: Change landlock_restrict_self(2) check ordering
  landlock: Change landlock_add_rule(2) argument check ordering
  selftests/landlock: Add tests for O_PATH
  selftests/landlock: Fully test file rename with "remove" access
  selftests/landlock: Extend access right tests to directories
  selftests/landlock: Add tests for unknown access rights
  ...
Commit cb44e4f061 by Linus Torvalds, 2022-05-24 13:09:13 -07:00
24 changed files with 2597 additions and 711 deletions


@ -7,7 +7,7 @@ Landlock LSM: kernel documentation
==================================
:Author: Mickaël Salaün
:Date: March 2021
:Date: May 2022
Landlock's goal is to create scoped access-control (i.e. sandboxing). To
harden a whole system, this feature should be available to any process,
@ -42,6 +42,21 @@ Guiding principles for safe access controls
* Computation related to Landlock operations (e.g. enforcing a ruleset) shall
only impact the processes requesting them.
Design choices
==============
Filesystem access rights
------------------------
All access rights are tied to an inode and what can be accessed through it.
Reading the content of a directory doesn't imply being allowed to read the
content of a listed inode. Indeed, a file name is local to its parent
directory, and an inode can be referenced by multiple file names thanks to
(hard) links. Being able to unlink a file only has a direct impact on the
directory, not the unlinked inode. This is the reason why
`LANDLOCK_ACCESS_FS_REMOVE_FILE` or `LANDLOCK_ACCESS_FS_REFER` are not allowed
to be tied to files but only to directories.
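
As a minimal user-space sketch of this design choice (assuming a `ruleset_fd`
whose `handled_access_fs` includes `LANDLOCK_ACCESS_FS_REMOVE_FILE`, the
`landlock_add_rule()` wrapper used elsewhere in the Landlock documentation, and
hypothetical paths), tying a directory-only right to a regular file is
rejected, while the same rule on its parent directory is accepted:

.. code-block:: c

    struct landlock_path_beneath_attr path_beneath = {
        .allowed_access = LANDLOCK_ACCESS_FS_REMOVE_FILE,
    };

    /* Rejected with EINVAL: removal is not an action on the file itself. */
    path_beneath.parent_fd = open("/tmp/file", O_PATH | O_CLOEXEC);
    if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
                          &path_beneath, 0))
        perror("Failed to add a rule on a file");
    close(path_beneath.parent_fd);

    /* Accepted: removal is an action on the parent directory. */
    path_beneath.parent_fd = open("/tmp", O_PATH | O_CLOEXEC);
    landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
                      &path_beneath, 0);
    close(path_beneath.parent_fd);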
Tests
=====


@ -1,14 +1,14 @@
.. SPDX-License-Identifier: GPL-2.0
.. Copyright © 2017-2020 Mickaël Salaün <mic@digikod.net>
.. Copyright © 2019-2020 ANSSI
.. Copyright © 2021 Microsoft Corporation
.. Copyright © 2021-2022 Microsoft Corporation
=====================================
Landlock: unprivileged access control
=====================================
:Author: Mickaël Salaün
:Date: March 2021
:Date: May 2022
The goal of Landlock is to enable restricting ambient rights (e.g. global
filesystem access) for a set of processes. Because Landlock is a stackable
@ -18,6 +18,13 @@ is expected to help mitigate the security impact of bugs or
unexpected/malicious behaviors in user space applications. Landlock empowers
any process, including unprivileged ones, to securely restrict themselves.
We can quickly make sure that Landlock is enabled in the running system by
looking for "landlock: Up and running" in kernel logs (as root): ``dmesg | grep
landlock || journalctl -kg landlock`` . Developers can also easily check for
Landlock support with a :ref:`related system call <landlock_abi_versions>`. If
Landlock is not currently supported, we need to :ref:`configure the kernel
appropriately <kernel_support>`.
Landlock rules
==============
@ -29,14 +36,15 @@ the thread enforcing it, and its future children.
Defining and enforcing a security policy
----------------------------------------
We first need to create the ruleset that will contain our rules. For this
We first need to define the ruleset that will contain our rules. For this
example, the ruleset will contain rules that only allow read actions, but write
actions will be denied. The ruleset then needs to handle both of these kinds of
actions.
actions. This is required for backward and forward compatibility (i.e. the
kernel and user space may not know each other's supported restrictions), hence
the need to be explicit about the denied-by-default access rights.
.. code-block:: c
int ruleset_fd;
struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs =
LANDLOCK_ACCESS_FS_EXECUTE |
@ -51,9 +59,34 @@ actions.
LANDLOCK_ACCESS_FS_MAKE_SOCK |
LANDLOCK_ACCESS_FS_MAKE_FIFO |
LANDLOCK_ACCESS_FS_MAKE_BLOCK |
LANDLOCK_ACCESS_FS_MAKE_SYM,
LANDLOCK_ACCESS_FS_MAKE_SYM |
LANDLOCK_ACCESS_FS_REFER,
};
Because we may not know on which kernel version an application will be
executed, it is safer to follow a best-effort security approach. Indeed, we
should try to protect users as much as possible, whatever kernel they are
using. To avoid binary enforcement (i.e. either all security features or
none), we can leverage a dedicated Landlock command to get the current version
of the Landlock ABI and adapt the handled accesses. Let's check if we should
remove the `LANDLOCK_ACCESS_FS_REFER` access right which is only supported
starting with the second version of the ABI.
.. code-block:: c
int abi;
abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);
if (abi < 2) {
ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_REFER;
}
This enables the creation of an inclusive ruleset that will contain our rules.
.. code-block:: c
int ruleset_fd;
ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
if (ruleset_fd < 0) {
perror("Failed to create a ruleset");
@ -92,6 +125,11 @@ descriptor.
return 1;
}
It may also be required to create rules following the same logic as explained
for the ruleset creation, by filtering access rights according to the Landlock
ABI version. In this example, this is not required because
`LANDLOCK_ACCESS_FS_REFER` is not allowed by any rule.
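
If a rule did need `LANDLOCK_ACCESS_FS_REFER`, a minimal sketch (with a
hypothetical `allowed_access` set, reusing the `abi` value from above) would
filter the rule the same way as the ruleset, because a rule cannot allow an
access right that its ruleset does not handle:

.. code-block:: c

    struct landlock_path_beneath_attr path_beneath = {
        .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                          LANDLOCK_ACCESS_FS_READ_DIR |
                          LANDLOCK_ACCESS_FS_REFER,
    };

    if (abi < 2) {
        /* Keeps the rule consistent with the pruned handled_access_fs. */
        path_beneath.allowed_access &= ~LANDLOCK_ACCESS_FS_REFER;
    }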
We now have a ruleset with one rule allowing read access to ``/usr`` while
denying all other handled accesses for the filesystem. The next step is to
restrict the current thread from gaining more privileges (e.g. thanks to a SUID
@ -125,6 +163,27 @@ ruleset.
Full working code can be found in `samples/landlock/sandboxer.c`_.
Good practices
--------------
It is recommended to set access rights on file hierarchy leaves as much as
possible. For instance, it is better to be able to have ``~/doc/`` as a
read-only hierarchy and ``~/tmp/`` as a read-write hierarchy, compared to
``~/`` as a read-only hierarchy and ``~/tmp/`` as a read-write hierarchy.
Following this good practice leads to self-sufficient hierarchies that don't
depend on their location (i.e. parent directories). This is particularly
relevant when we want to allow linking or renaming. Indeed, having consistent
access rights per directory enables changing the location of such a directory
without relying on the destination directory access rights (except those that
are required for this operation, see `LANDLOCK_ACCESS_FS_REFER` documentation).
Having self-sufficient hierarchies also helps to tighten the required access
rights to the minimal set of data. This also helps avoid sinkhole directories,
i.e. directories where data can be linked to but not linked from. However,
this depends on data organization, which might not be controlled by developers.
In this case, granting read-write access to ``~/tmp/``, instead of write-only
access, would potentially allow moving ``~/tmp/`` to a non-readable directory
and still keep the ability to list the content of ``~/tmp/``.
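
As a minimal sketch of this leaf-oriented approach (hypothetical home paths,
error handling trimmed, reusing the `ruleset_fd` and the `landlock_add_rule()`
helper from the previous examples), two self-sufficient rules can replace a
single broad rule on ``~/``:

.. code-block:: c

    struct landlock_path_beneath_attr path_beneath = {
        /* Read-only hierarchy: ~/doc/ */
        .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                          LANDLOCK_ACCESS_FS_READ_DIR,
    };

    path_beneath.parent_fd = open("/home/user/doc", O_PATH | O_CLOEXEC);
    landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
    close(path_beneath.parent_fd);

    /* Read-write hierarchy: ~/tmp/ (read access is kept to avoid a sinkhole). */
    path_beneath.allowed_access |= LANDLOCK_ACCESS_FS_WRITE_FILE |
                                   LANDLOCK_ACCESS_FS_MAKE_REG |
                                   LANDLOCK_ACCESS_FS_REMOVE_FILE;
    path_beneath.parent_fd = open("/home/user/tmp", O_PATH | O_CLOEXEC);
    landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
    close(path_beneath.parent_fd);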
Layers of file path access rights
---------------------------------
@ -192,6 +251,58 @@ To be allowed to use :manpage:`ptrace(2)` and related syscalls on a target
process, a sandboxed process should have a subset of the target process rules,
which means the tracee must be in a sub-domain of the tracer.
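
A hedged user-space sketch of what this means in practice (hypothetical
`target_pid`, assuming the calling thread is sandboxed while the target is not
in one of its sub-domains):

.. code-block:: c

    /* The tracer is sandboxed; the target lives outside its domain. */
    if (ptrace(PTRACE_ATTACH, target_pid, NULL, NULL) < 0 &&
        errno == EPERM) {
        /* Denied: the tracee is not in a sub-domain of the tracer. */
    }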
Compatibility
=============
Backward and forward compatibility
----------------------------------
Landlock is designed to be compatible with past and future versions of the
kernel. This is achieved thanks to the system call attributes and the
associated bitflags, particularly the ruleset's `handled_access_fs`. Making
handled access rights explicit enables the kernel and user space to have a clear
contract with each other. This is required to make sure sandboxing will not
get stricter with a system update, which could break applications.
Developers can subscribe to the `Landlock mailing list
<https://subspace.kernel.org/lists.linux.dev.html>`_ to knowingly update and
test their applications with the latest available features. In the interest of
users, and because they may use different kernel versions, it is strongly
encouraged to follow a best-effort security approach by checking the Landlock
ABI version at runtime and only enforcing the supported features.
.. _landlock_abi_versions:
Landlock ABI versions
---------------------
The Landlock ABI version can be read with the sys_landlock_create_ruleset()
system call:
.. code-block:: c
int abi;
abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);
if (abi < 0) {
switch (errno) {
case ENOSYS:
printf("Landlock is not supported by the current kernel.\n");
break;
case EOPNOTSUPP:
printf("Landlock is currently disabled.\n");
break;
}
return 0;
}
if (abi >= 2) {
printf("Landlock supports LANDLOCK_ACCESS_FS_REFER.\n");
}
The following kernel interfaces are implicitly supported by the first ABI
version. Features only supported from a specific version are explicitly marked
as such.
Kernel interface
================
@ -228,21 +339,6 @@ Enforcing a ruleset
Current limitations
===================
File renaming and linking
-------------------------
Because Landlock targets unprivileged access controls, it is needed to properly
handle composition of rules. Such property also implies rules nesting.
Properly handling multiple layers of ruleset, each one of them able to restrict
access to files, also implies to inherit the ruleset restrictions from a parent
to its hierarchy. Because files are identified and restricted by their
hierarchy, moving or linking a file from one directory to another implies to
propagate the hierarchy constraints. To protect against privilege escalations
through renaming or linking, and for the sake of simplicity, Landlock currently
limits linking and renaming to the same directory. Future Landlock evolutions
will enable more flexibility for renaming and linking, with dedicated ruleset
flags.
Filesystem topology modification
--------------------------------
@ -267,8 +363,8 @@ restrict such paths with dedicated ruleset flags.
Ruleset layers
--------------
There is a limit of 64 layers of stacked rulesets. This can be an issue for a
task willing to enforce a new ruleset in complement to its 64 inherited
There is a limit of 16 layers of stacked rulesets. This can be an issue for a
task willing to enforce a new ruleset in complement to its 16 inherited
rulesets. Once this limit is reached, sys_landlock_restrict_self() returns
E2BIG. It is then strongly suggested to carefully build rulesets once in the
life of a thread, especially for applications able to launch other applications
@ -281,6 +377,44 @@ Memory usage
Kernel memory allocated to create rulesets is accounted and can be restricted
by the Documentation/admin-guide/cgroup-v1/memory.rst.
Previous limitations
====================
File renaming and linking (ABI 1)
---------------------------------
Because Landlock targets unprivileged access controls, it needs to properly
handle composition of rules. Such property also implies rules nesting.
Properly handling multiple layers of rulesets, each one of them able to
restrict access to files, also implies inheritance of the ruleset restrictions
from a parent to its hierarchy. Because files are identified and restricted by
their hierarchy, moving or linking a file from one directory to another implies
propagation of the hierarchy constraints, or restriction of these actions
according to the potentially lost constraints. To protect against privilege
escalations through renaming or linking, and for the sake of simplicity,
Landlock previously limited linking and renaming to the same directory.
Starting with the Landlock ABI version 2, it is now possible to securely
control renaming and linking thanks to the new `LANDLOCK_ACCESS_FS_REFER`
access right.
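
As an illustrative user-space sketch (hypothetical paths, assuming a domain
that handles `LANDLOCK_ACCESS_FS_REFER` as introduced by ABI version 2), a
cross-directory rename either succeeds or fails with an error that
distinguishes the two causes of denial:

.. code-block:: c

    if (rename("/home/user/tmp/report", "/home/user/doc/report") < 0) {
        switch (errno) {
        case EXDEV:
            /*
             * Reparenting denied: LANDLOCK_ACCESS_FS_REFER is missing on
             * the source or destination hierarchy, or the destination has
             * fewer restrictions than the source.
             */
            break;
        case EACCES:
            /* File removal or creation is denied by the domain. */
            break;
        }
    }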
.. _kernel_support:
Kernel support
==============
Landlock was first introduced in Linux 5.13 but it must be configured at build
time with `CONFIG_SECURITY_LANDLOCK=y`. Landlock must also be enabled at boot
time, like the other security modules. The list of security modules enabled by
default is set with `CONFIG_LSM`. The kernel configuration should then
contain `CONFIG_LSM=landlock,[...]` with `[...]` as the list of other
potentially useful security modules for the running system (see the
`CONFIG_LSM` help).
If the running kernel doesn't have `landlock` in `CONFIG_LSM`, then we can
still enable it by adding ``lsm=landlock,[...]`` to the kernel command line
(cf. Documentation/admin-guide/kernel-parameters.rst) through the bootloader
configuration.
Questions and answers
=====================


@ -100,7 +100,7 @@ LSM_HOOK(int, 0, path_link, struct dentry *old_dentry,
const struct path *new_dir, struct dentry *new_dentry)
LSM_HOOK(int, 0, path_rename, const struct path *old_dir,
struct dentry *old_dentry, const struct path *new_dir,
struct dentry *new_dentry)
struct dentry *new_dentry, unsigned int flags)
LSM_HOOK(int, 0, path_chmod, const struct path *path, umode_t mode)
LSM_HOOK(int, 0, path_chown, const struct path *path, kuid_t uid, kgid_t gid)
LSM_HOOK(int, 0, path_chroot, const struct path *path)


@ -358,6 +358,7 @@
* @old_dentry contains the dentry structure of the old link.
* @new_dir contains the path structure for parent of the new link.
* @new_dentry contains the dentry structure of the new link.
* @flags may contain rename options such as RENAME_EXCHANGE.
* Return 0 if permission is granted.
* @path_chmod:
* Check for permission to change a mode of the file @path. The new


@ -21,8 +21,14 @@ struct landlock_ruleset_attr {
/**
* @handled_access_fs: Bitmask of actions (cf. `Filesystem flags`_)
* that is handled by this ruleset and should then be forbidden if no
* rule explicitly allow them. This is needed for backward
* compatibility reasons.
* rule explicitly allows them: it is a deny-by-default list that should
* contain as many Landlock access rights as possible. Indeed, all
* Landlock filesystem access rights that are not part of
* handled_access_fs are allowed. This is needed for backward
* compatibility reasons. One exception is the
* LANDLOCK_ACCESS_FS_REFER access right, which is always implicitly
* handled, but must still be explicitly handled to add new rules with
* this access right.
*/
__u64 handled_access_fs;
};
@ -33,7 +39,9 @@ struct landlock_ruleset_attr {
* - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI
* version.
*/
/* clang-format off */
#define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)
/* clang-format on */
/**
* enum landlock_rule_type - Landlock rule type
@ -60,8 +68,9 @@ struct landlock_path_beneath_attr {
*/
__u64 allowed_access;
/**
* @parent_fd: File descriptor, open with ``O_PATH``, which identifies
* the parent directory of a file hierarchy, or just a file.
* @parent_fd: File descriptor, preferably opened with ``O_PATH``,
* which identifies the parent directory of a file hierarchy, or just a
* file.
*/
__s32 parent_fd;
/*
@ -109,6 +118,22 @@ struct landlock_path_beneath_attr {
* - %LANDLOCK_ACCESS_FS_MAKE_FIFO: Create (or rename or link) a named pipe.
* - %LANDLOCK_ACCESS_FS_MAKE_BLOCK: Create (or rename or link) a block device.
* - %LANDLOCK_ACCESS_FS_MAKE_SYM: Create (or rename or link) a symbolic link.
* - %LANDLOCK_ACCESS_FS_REFER: Link or rename a file from or to a different
* directory (i.e. reparent a file hierarchy). This access right is
* available since the second version of the Landlock ABI. This is also the
* only access right which is always considered handled by any ruleset in
* such a way that reparenting a file hierarchy is always denied by default.
* To avoid privilege escalation, it is not enough to add a rule with this
* access right. When linking or renaming a file, the destination directory
* hierarchy must also always have the same or a superset of restrictions of
* the source hierarchy. If it is not the case, or if the domain doesn't
* handle this access right, such actions are denied by default with errno
* set to EXDEV. Linking also requires a LANDLOCK_ACCESS_FS_MAKE_* access
* right on the destination directory, and renaming also requires a
* LANDLOCK_ACCESS_FS_REMOVE_* access right on the source's (file or
* directory) parent. Otherwise, such actions are denied with errno set to
* EACCES. The EACCES errno prevails over EXDEV to let user space
* efficiently deal with an unrecoverable error.
*
* .. warning::
*
@ -120,6 +145,7 @@ struct landlock_path_beneath_attr {
* :manpage:`access(2)`.
* Future Landlock evolutions will enable to restrict them.
*/
/* clang-format off */
#define LANDLOCK_ACCESS_FS_EXECUTE (1ULL << 0)
#define LANDLOCK_ACCESS_FS_WRITE_FILE (1ULL << 1)
#define LANDLOCK_ACCESS_FS_READ_FILE (1ULL << 2)
@ -133,5 +159,7 @@ struct landlock_path_beneath_attr {
#define LANDLOCK_ACCESS_FS_MAKE_FIFO (1ULL << 10)
#define LANDLOCK_ACCESS_FS_MAKE_BLOCK (1ULL << 11)
#define LANDLOCK_ACCESS_FS_MAKE_SYM (1ULL << 12)
#define LANDLOCK_ACCESS_FS_REFER (1ULL << 13)
/* clang-format on */
#endif /* _UAPI_LINUX_LANDLOCK_H */


@ -22,9 +22,9 @@
#include <unistd.h>
#ifndef landlock_create_ruleset
static inline int landlock_create_ruleset(
const struct landlock_ruleset_attr *const attr,
const size_t size, const __u32 flags)
static inline int
landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
const size_t size, const __u32 flags)
{
return syscall(__NR_landlock_create_ruleset, attr, size, flags);
}
@ -32,17 +32,18 @@ static inline int landlock_create_ruleset(
#ifndef landlock_add_rule
static inline int landlock_add_rule(const int ruleset_fd,
const enum landlock_rule_type rule_type,
const void *const rule_attr, const __u32 flags)
const enum landlock_rule_type rule_type,
const void *const rule_attr,
const __u32 flags)
{
return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
rule_attr, flags);
return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
flags);
}
#endif
#ifndef landlock_restrict_self
static inline int landlock_restrict_self(const int ruleset_fd,
const __u32 flags)
const __u32 flags)
{
return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
}
@ -70,14 +71,17 @@ static int parse_path(char *env_path, const char ***const path_list)
return num_paths;
}
/* clang-format off */
#define ACCESS_FILE ( \
LANDLOCK_ACCESS_FS_EXECUTE | \
LANDLOCK_ACCESS_FS_WRITE_FILE | \
LANDLOCK_ACCESS_FS_READ_FILE)
static int populate_ruleset(
const char *const env_var, const int ruleset_fd,
const __u64 allowed_access)
/* clang-format on */
static int populate_ruleset(const char *const env_var, const int ruleset_fd,
const __u64 allowed_access)
{
int num_paths, i, ret = 1;
char *env_path_name;
@ -107,12 +111,10 @@ static int populate_ruleset(
for (i = 0; i < num_paths; i++) {
struct stat statbuf;
path_beneath.parent_fd = open(path_list[i], O_PATH |
O_CLOEXEC);
path_beneath.parent_fd = open(path_list[i], O_PATH | O_CLOEXEC);
if (path_beneath.parent_fd < 0) {
fprintf(stderr, "Failed to open \"%s\": %s\n",
path_list[i],
strerror(errno));
path_list[i], strerror(errno));
goto out_free_name;
}
if (fstat(path_beneath.parent_fd, &statbuf)) {
@ -123,9 +125,10 @@ static int populate_ruleset(
if (!S_ISDIR(statbuf.st_mode))
path_beneath.allowed_access &= ACCESS_FILE;
if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath, 0)) {
fprintf(stderr, "Failed to update the ruleset with \"%s\": %s\n",
path_list[i], strerror(errno));
&path_beneath, 0)) {
fprintf(stderr,
"Failed to update the ruleset with \"%s\": %s\n",
path_list[i], strerror(errno));
close(path_beneath.parent_fd);
goto out_free_name;
}
@ -139,6 +142,8 @@ static int populate_ruleset(
return ret;
}
/* clang-format off */
#define ACCESS_FS_ROUGHLY_READ ( \
LANDLOCK_ACCESS_FS_EXECUTE | \
LANDLOCK_ACCESS_FS_READ_FILE | \
@ -154,64 +159,89 @@ static int populate_ruleset(
LANDLOCK_ACCESS_FS_MAKE_SOCK | \
LANDLOCK_ACCESS_FS_MAKE_FIFO | \
LANDLOCK_ACCESS_FS_MAKE_BLOCK | \
LANDLOCK_ACCESS_FS_MAKE_SYM)
LANDLOCK_ACCESS_FS_MAKE_SYM | \
LANDLOCK_ACCESS_FS_REFER)
#define ACCESS_ABI_2 ( \
LANDLOCK_ACCESS_FS_REFER)
/* clang-format on */
int main(const int argc, char *const argv[], char *const *const envp)
{
const char *cmd_path;
char *const *cmd_argv;
int ruleset_fd;
int ruleset_fd, abi;
__u64 access_fs_ro = ACCESS_FS_ROUGHLY_READ,
access_fs_rw = ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE;
struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = ACCESS_FS_ROUGHLY_READ |
ACCESS_FS_ROUGHLY_WRITE,
.handled_access_fs = access_fs_rw,
};
if (argc < 2) {
fprintf(stderr, "usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
fprintf(stderr, "Launch a command in a restricted environment.\n\n");
fprintf(stderr,
"usage: %s=\"...\" %s=\"...\" %s <cmd> [args]...\n\n",
ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
fprintf(stderr,
"Launch a command in a restricted environment.\n\n");
fprintf(stderr, "Environment variables containing paths, "
"each separated by a colon:\n");
fprintf(stderr, "* %s: list of paths allowed to be used in a read-only way.\n",
ENV_FS_RO_NAME);
fprintf(stderr, "* %s: list of paths allowed to be used in a read-write way.\n",
ENV_FS_RW_NAME);
fprintf(stderr, "\nexample:\n"
"%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
"%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
"%s bash -i\n",
ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
fprintf(stderr,
"* %s: list of paths allowed to be used in a read-only way.\n",
ENV_FS_RO_NAME);
fprintf(stderr,
"* %s: list of paths allowed to be used in a read-write way.\n",
ENV_FS_RW_NAME);
fprintf(stderr,
"\nexample:\n"
"%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" "
"%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" "
"%s bash -i\n",
ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]);
return 1;
}
ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
if (ruleset_fd < 0) {
abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);
if (abi < 0) {
const int err = errno;
perror("Failed to create a ruleset");
perror("Failed to check Landlock compatibility");
switch (err) {
case ENOSYS:
fprintf(stderr, "Hint: Landlock is not supported by the current kernel. "
"To support it, build the kernel with "
"CONFIG_SECURITY_LANDLOCK=y and prepend "
"\"landlock,\" to the content of CONFIG_LSM.\n");
fprintf(stderr,
"Hint: Landlock is not supported by the current kernel. "
"To support it, build the kernel with "
"CONFIG_SECURITY_LANDLOCK=y and prepend "
"\"landlock,\" to the content of CONFIG_LSM.\n");
break;
case EOPNOTSUPP:
fprintf(stderr, "Hint: Landlock is currently disabled. "
"It can be enabled in the kernel configuration by "
"prepending \"landlock,\" to the content of CONFIG_LSM, "
"or at boot time by setting the same content to the "
"\"lsm\" kernel parameter.\n");
fprintf(stderr,
"Hint: Landlock is currently disabled. "
"It can be enabled in the kernel configuration by "
"prepending \"landlock,\" to the content of CONFIG_LSM, "
"or at boot time by setting the same content to the "
"\"lsm\" kernel parameter.\n");
break;
}
return 1;
}
if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd,
ACCESS_FS_ROUGHLY_READ)) {
/* Best-effort security. */
if (abi < 2) {
ruleset_attr.handled_access_fs &= ~ACCESS_ABI_2;
access_fs_ro &= ~ACCESS_ABI_2;
access_fs_rw &= ~ACCESS_ABI_2;
}
ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
if (ruleset_fd < 0) {
perror("Failed to create a ruleset");
return 1;
}
if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd, access_fs_ro)) {
goto err_close_ruleset;
}
if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd,
ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE)) {
if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd, access_fs_rw)) {
goto err_close_ruleset;
}
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
@ -228,7 +258,7 @@ int main(const int argc, char *const argv[], char *const *const envp)
cmd_argv = argv + 1;
execvpe(cmd_path, cmd_argv, envp);
fprintf(stderr, "Failed to execute \"%s\": %s\n", cmd_path,
strerror(errno));
strerror(errno));
fprintf(stderr, "Hint: access to the binary, the interpreter or "
"shared libraries may be denied.\n");
return 1;


@ -354,13 +354,16 @@ static int apparmor_path_link(struct dentry *old_dentry, const struct path *new_
}
static int apparmor_path_rename(const struct path *old_dir, struct dentry *old_dentry,
const struct path *new_dir, struct dentry *new_dentry)
const struct path *new_dir, struct dentry *new_dentry,
const unsigned int flags)
{
struct aa_label *label;
int error = 0;
if (!path_mediated_fs(old_dentry))
return 0;
if ((flags & RENAME_EXCHANGE) && !path_mediated_fs(new_dentry))
return 0;
label = begin_current_label_crit_section();
if (!unconfined(label)) {
@ -374,10 +377,27 @@ static int apparmor_path_rename(const struct path *old_dir, struct dentry *old_d
d_backing_inode(old_dentry)->i_mode
};
error = aa_path_perm(OP_RENAME_SRC, label, &old_path, 0,
MAY_READ | AA_MAY_GETATTR | MAY_WRITE |
AA_MAY_SETATTR | AA_MAY_DELETE,
&cond);
if (flags & RENAME_EXCHANGE) {
struct path_cond cond_exchange = {
i_uid_into_mnt(mnt_userns, d_backing_inode(new_dentry)),
d_backing_inode(new_dentry)->i_mode
};
error = aa_path_perm(OP_RENAME_SRC, label, &new_path, 0,
MAY_READ | AA_MAY_GETATTR | MAY_WRITE |
AA_MAY_SETATTR | AA_MAY_DELETE,
&cond_exchange);
if (!error)
error = aa_path_perm(OP_RENAME_DEST, label, &old_path,
0, MAY_WRITE | AA_MAY_SETATTR |
AA_MAY_CREATE, &cond_exchange);
}
if (!error)
error = aa_path_perm(OP_RENAME_SRC, label, &old_path, 0,
MAY_READ | AA_MAY_GETATTR | MAY_WRITE |
AA_MAY_SETATTR | AA_MAY_DELETE,
&cond);
if (!error)
error = aa_path_perm(OP_RENAME_DEST, label, &new_path,
0, MAY_WRITE | AA_MAY_SETATTR |


@ -15,7 +15,7 @@
#include "setup.h"
static int hook_cred_prepare(struct cred *const new,
const struct cred *const old, const gfp_t gfp)
const struct cred *const old, const gfp_t gfp)
{
struct landlock_ruleset *const old_dom = landlock_cred(old)->domain;
@ -42,5 +42,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
__init void landlock_add_cred_hooks(void)
{
security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
LANDLOCK_NAME);
LANDLOCK_NAME);
}


@ -20,8 +20,8 @@ struct landlock_cred_security {
struct landlock_ruleset *domain;
};
static inline struct landlock_cred_security *landlock_cred(
const struct cred *cred)
static inline struct landlock_cred_security *
landlock_cred(const struct cred *cred)
{
return cred->security + landlock_blob_sizes.lbs_cred;
}
@ -34,8 +34,8 @@ static inline const struct landlock_ruleset *landlock_get_current_domain(void)
/*
* The call needs to come from an RCU read-side critical section.
*/
static inline const struct landlock_ruleset *landlock_get_task_domain(
const struct task_struct *const task)
static inline const struct landlock_ruleset *
landlock_get_task_domain(const struct task_struct *const task)
{
return landlock_cred(__task_cred(task))->domain;
}


@ -4,6 +4,7 @@
*
* Copyright © 2016-2020 Mickaël Salaün <mic@digikod.net>
* Copyright © 2018-2020 ANSSI
* Copyright © 2021-2022 Microsoft Corporation
*/
#include <linux/atomic.h>
@ -141,23 +142,26 @@ static struct landlock_object *get_inode_object(struct inode *const inode)
}
/* All access rights that can be tied to files. */
/* clang-format off */
#define ACCESS_FILE ( \
LANDLOCK_ACCESS_FS_EXECUTE | \
LANDLOCK_ACCESS_FS_WRITE_FILE | \
LANDLOCK_ACCESS_FS_READ_FILE)
/* clang-format on */
/*
* @path: Should have been checked by get_path_from_fd().
*/
int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
const struct path *const path, u32 access_rights)
const struct path *const path,
access_mask_t access_rights)
{
int err;
struct landlock_object *object;
/* Files only get access rights that make sense. */
if (!d_is_dir(path->dentry) && (access_rights | ACCESS_FILE) !=
ACCESS_FILE)
if (!d_is_dir(path->dentry) &&
(access_rights | ACCESS_FILE) != ACCESS_FILE)
return -EINVAL;
if (WARN_ON_ONCE(ruleset->num_layers != 1))
return -EINVAL;
@ -180,84 +184,352 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
/* Access-control management */
static inline u64 unmask_layers(
const struct landlock_ruleset *const domain,
const struct path *const path, const u32 access_request,
u64 layer_mask)
/*
* The lifetime of the returned rule is tied to @domain.
*
* Returns NULL if no rule is found or if @dentry is negative.
*/
static inline const struct landlock_rule *
find_rule(const struct landlock_ruleset *const domain,
const struct dentry *const dentry)
{
const struct landlock_rule *rule;
const struct inode *inode;
size_t i;
if (d_is_negative(path->dentry))
/* Ignore nonexistent leafs. */
return layer_mask;
inode = d_backing_inode(path->dentry);
/* Ignores nonexistent leafs. */
if (d_is_negative(dentry))
return NULL;
inode = d_backing_inode(dentry);
rcu_read_lock();
rule = landlock_find_rule(domain,
rcu_dereference(landlock_inode(inode)->object));
rule = landlock_find_rule(
domain, rcu_dereference(landlock_inode(inode)->object));
rcu_read_unlock();
return rule;
}
/*
* @layer_masks is read and may be updated according to the access request and
* the matching rule.
*
* Returns true if the request is allowed (i.e. relevant layer masks for the
* request are empty).
*/
static inline bool
unmask_layers(const struct landlock_rule *const rule,
const access_mask_t access_request,
layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
{
size_t layer_level;
if (!access_request || !layer_masks)
return true;
if (!rule)
return layer_mask;
return false;
/*
* An access is granted if, for each policy layer, at least one rule
* encountered on the pathwalk grants the requested accesses,
* regardless of their position in the layer stack. We must then check
* encountered on the pathwalk grants the requested access,
* regardless of its position in the layer stack. We must then check
* the remaining layers for each inode, from the first added layer to
* the last one.
* the last one. When there are multiple requested accesses, for each
* policy layer, the full set of requested accesses may not be granted
* by only one rule, but by the union (binary OR) of multiple rules.
* E.g. /a/b <execute> + /a <read> => /a/b <execute + read>
*/
for (i = 0; i < rule->num_layers; i++) {
const struct landlock_layer *const layer = &rule->layers[i];
const u64 layer_level = BIT_ULL(layer->level - 1);
for (layer_level = 0; layer_level < rule->num_layers; layer_level++) {
const struct landlock_layer *const layer =
&rule->layers[layer_level];
const layer_mask_t layer_bit = BIT_ULL(layer->level - 1);
const unsigned long access_req = access_request;
unsigned long access_bit;
bool is_empty;
/* Checks that the layer grants access to the full request. */
if ((layer->access & access_request) == access_request) {
layer_mask &= ~layer_level;
if (layer_mask == 0)
return layer_mask;
/*
* Records in @layer_masks which layer grants access to each
* requested access.
*/
is_empty = true;
for_each_set_bit(access_bit, &access_req,
ARRAY_SIZE(*layer_masks)) {
if (layer->access & BIT_ULL(access_bit))
(*layer_masks)[access_bit] &= ~layer_bit;
is_empty = is_empty && !(*layer_masks)[access_bit];
}
if (is_empty)
return true;
}
return layer_mask;
return false;
}
static int check_access_path(const struct landlock_ruleset *const domain,
const struct path *const path, u32 access_request)
/*
* Allows access to pseudo filesystems that will never be mountable (e.g.
* sockfs, pipefs), but can still be reachable through
* /proc/<pid>/fd/<file-descriptor>
*/
static inline bool is_nouser_or_private(const struct dentry *dentry)
{
bool allowed = false;
struct path walker_path;
u64 layer_mask;
size_t i;
return (dentry->d_sb->s_flags & SB_NOUSER) ||
(d_is_positive(dentry) &&
unlikely(IS_PRIVATE(d_backing_inode(dentry))));
}
/* Make sure all layers can be checked. */
BUILD_BUG_ON(BITS_PER_TYPE(layer_mask) < LANDLOCK_MAX_NUM_LAYERS);
static inline access_mask_t
get_handled_accesses(const struct landlock_ruleset *const domain)
{
access_mask_t access_dom = 0;
unsigned long access_bit;
for (access_bit = 0; access_bit < LANDLOCK_NUM_ACCESS_FS;
access_bit++) {
size_t layer_level;
for (layer_level = 0; layer_level < domain->num_layers;
layer_level++) {
if (domain->fs_access_masks[layer_level] &
BIT_ULL(access_bit)) {
access_dom |= BIT_ULL(access_bit);
break;
}
}
}
return access_dom;
}
static inline access_mask_t
init_layer_masks(const struct landlock_ruleset *const domain,
const access_mask_t access_request,
layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
{
access_mask_t handled_accesses = 0;
size_t layer_level;
memset(layer_masks, 0, sizeof(*layer_masks));
/* An empty access request can happen because of O_WRONLY | O_RDWR. */
if (!access_request)
return 0;
/* Saves all handled accesses per layer. */
for (layer_level = 0; layer_level < domain->num_layers; layer_level++) {
const unsigned long access_req = access_request;
unsigned long access_bit;
for_each_set_bit(access_bit, &access_req,
ARRAY_SIZE(*layer_masks)) {
if (domain->fs_access_masks[layer_level] &
BIT_ULL(access_bit)) {
(*layer_masks)[access_bit] |=
BIT_ULL(layer_level);
handled_accesses |= BIT_ULL(access_bit);
}
}
}
return handled_accesses;
}
/*
* Check that a destination file hierarchy has more restrictions than a source
* file hierarchy. This is only used for link and rename actions.
*
* @layer_masks_child2: Optional child masks.
*/
static inline bool no_more_access(
const layer_mask_t (*const layer_masks_parent1)[LANDLOCK_NUM_ACCESS_FS],
const layer_mask_t (*const layer_masks_child1)[LANDLOCK_NUM_ACCESS_FS],
const bool child1_is_directory,
const layer_mask_t (*const layer_masks_parent2)[LANDLOCK_NUM_ACCESS_FS],
const layer_mask_t (*const layer_masks_child2)[LANDLOCK_NUM_ACCESS_FS],
const bool child2_is_directory)
{
unsigned long access_bit;
for (access_bit = 0; access_bit < ARRAY_SIZE(*layer_masks_parent2);
access_bit++) {
/* Ignores accesses that only make sense for directories. */
const bool is_file_access =
!!(BIT_ULL(access_bit) & ACCESS_FILE);
if (child1_is_directory || is_file_access) {
/*
* Checks if the destination restrictions are a
* superset of the source ones (i.e. inherited access
* rights without child exceptions):
* restrictions(parent2) >= restrictions(child1)
*/
if ((((*layer_masks_parent1)[access_bit] &
(*layer_masks_child1)[access_bit]) |
(*layer_masks_parent2)[access_bit]) !=
(*layer_masks_parent2)[access_bit])
return false;
}
if (!layer_masks_child2)
continue;
if (child2_is_directory || is_file_access) {
/*
* Checks inverted restrictions for RENAME_EXCHANGE:
* restrictions(parent1) >= restrictions(child2)
*/
if ((((*layer_masks_parent2)[access_bit] &
(*layer_masks_child2)[access_bit]) |
(*layer_masks_parent1)[access_bit]) !=
(*layer_masks_parent1)[access_bit])
return false;
}
}
return true;
}
/*
* Removes @layer_masks accesses that are not requested.
*
* Returns true if the request is allowed, false otherwise.
*/
static inline bool
scope_to_request(const access_mask_t access_request,
layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS])
{
const unsigned long access_req = access_request;
unsigned long access_bit;
if (WARN_ON_ONCE(!layer_masks))
return true;
for_each_clear_bit(access_bit, &access_req, ARRAY_SIZE(*layer_masks))
(*layer_masks)[access_bit] = 0;
return !memchr_inv(layer_masks, 0, sizeof(*layer_masks));
}
/*
* Returns true if there is at least one access right different than
* LANDLOCK_ACCESS_FS_REFER.
*/
static inline bool
is_eacces(const layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS],
const access_mask_t access_request)
{
unsigned long access_bit;
/* LANDLOCK_ACCESS_FS_REFER alone must return -EXDEV. */
const unsigned long access_check = access_request &
~LANDLOCK_ACCESS_FS_REFER;
if (!layer_masks)
return false;
for_each_set_bit(access_bit, &access_check, ARRAY_SIZE(*layer_masks)) {
if ((*layer_masks)[access_bit])
return true;
}
return false;
}
/**
* check_access_path_dual - Check accesses for requests with a common path
*
* @domain: Domain to check against.
* @path: File hierarchy to walk through.
* @access_request_parent1: Accesses to check, once @layer_masks_parent1 is
* equal to @layer_masks_parent2 (if any). This is tied to the unique
* requested path for most actions, or the source in case of a refer action
* (i.e. rename or link), or the source and destination in case of
* RENAME_EXCHANGE.
* @layer_masks_parent1: Pointer to a matrix of layer masks per access
* masks, identifying the layers that forbid a specific access. Bits from
* this matrix can be unset according to the @path walk. An empty matrix
* means that @domain allows all possible Landlock accesses (i.e. not only
* those identified by @access_request_parent1). This matrix can
* initially refer to domain layer masks and, when the accesses for the
* destination and source are the same, to requested layer masks.
* @dentry_child1: Dentry to the initial child of the parent1 path. This
* pointer must be NULL for non-refer actions (i.e. not link nor rename).
* @access_request_parent2: Similar to @access_request_parent1 but for a
* request involving a source and a destination. This refers to the
* destination, except in case of RENAME_EXCHANGE where it also refers to
* the source. Must be set to 0 when using a simple path request.
* @layer_masks_parent2: Similar to @layer_masks_parent1 but for a refer
* action. This must be NULL otherwise.
* @dentry_child2: Dentry to the initial child of the parent2 path. This
* pointer is only set for RENAME_EXCHANGE actions and must be NULL
* otherwise.
*
* This helper first checks that the destination has a superset of restrictions
* compared to the source (if any) for a common path. Because of
* RENAME_EXCHANGE actions, source and destinations may be swapped. It then
* checks that the collected accesses and the remaining ones are enough to
* allow the request.
*
* Returns:
* - 0 if the access request is granted;
* - -EACCES if it is denied because of an access right other than
* LANDLOCK_ACCESS_FS_REFER;
* - -EXDEV if the renaming or linking would be a privilege escalation
* (according to each layered policy), or if LANDLOCK_ACCESS_FS_REFER is
* not allowed by the source or the destination.
*/
static int check_access_path_dual(
const struct landlock_ruleset *const domain,
const struct path *const path,
const access_mask_t access_request_parent1,
layer_mask_t (*const layer_masks_parent1)[LANDLOCK_NUM_ACCESS_FS],
const struct dentry *const dentry_child1,
const access_mask_t access_request_parent2,
layer_mask_t (*const layer_masks_parent2)[LANDLOCK_NUM_ACCESS_FS],
const struct dentry *const dentry_child2)
{
bool allowed_parent1 = false, allowed_parent2 = false, is_dom_check,
child1_is_directory = true, child2_is_directory = true;
struct path walker_path;
access_mask_t access_masked_parent1, access_masked_parent2;
layer_mask_t _layer_masks_child1[LANDLOCK_NUM_ACCESS_FS],
_layer_masks_child2[LANDLOCK_NUM_ACCESS_FS];
layer_mask_t(*layer_masks_child1)[LANDLOCK_NUM_ACCESS_FS] = NULL,
(*layer_masks_child2)[LANDLOCK_NUM_ACCESS_FS] = NULL;
if (!access_request_parent1 && !access_request_parent2)
return 0;
if (WARN_ON_ONCE(!domain || !path))
return 0;
/*
* Allows access to pseudo filesystems that will never be mountable
* (e.g. sockfs, pipefs), but can still be reachable through
* /proc/<pid>/fd/<file-descriptor> .
*/
if ((path->dentry->d_sb->s_flags & SB_NOUSER) ||
(d_is_positive(path->dentry) &&
unlikely(IS_PRIVATE(d_backing_inode(path->dentry)))))
if (is_nouser_or_private(path->dentry))
return 0;
if (WARN_ON_ONCE(domain->num_layers < 1))
if (WARN_ON_ONCE(domain->num_layers < 1 || !layer_masks_parent1))
return -EACCES;
/* Saves all layers handling a subset of requested accesses. */
layer_mask = 0;
for (i = 0; i < domain->num_layers; i++) {
if (domain->fs_access_masks[i] & access_request)
layer_mask |= BIT_ULL(i);
if (unlikely(layer_masks_parent2)) {
if (WARN_ON_ONCE(!dentry_child1))
return -EACCES;
/*
* For a double request, first check for potential privilege
* escalation by looking at domain handled accesses (which are
* a superset of the meaningful requested accesses).
*/
access_masked_parent1 = access_masked_parent2 =
get_handled_accesses(domain);
is_dom_check = true;
} else {
if (WARN_ON_ONCE(dentry_child1 || dentry_child2))
return -EACCES;
/* For a simple request, only check for requested accesses. */
access_masked_parent1 = access_request_parent1;
access_masked_parent2 = access_request_parent2;
is_dom_check = false;
}
if (unlikely(dentry_child1)) {
unmask_layers(find_rule(domain, dentry_child1),
init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS,
&_layer_masks_child1),
&_layer_masks_child1);
layer_masks_child1 = &_layer_masks_child1;
child1_is_directory = d_is_dir(dentry_child1);
}
if (unlikely(dentry_child2)) {
unmask_layers(find_rule(domain, dentry_child2),
init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS,
&_layer_masks_child2),
&_layer_masks_child2);
layer_masks_child2 = &_layer_masks_child2;
child2_is_directory = d_is_dir(dentry_child2);
}
/* An access request not handled by the domain is allowed. */
if (layer_mask == 0)
return 0;
walker_path = *path;
path_get(&walker_path);
@ -267,15 +539,54 @@ static int check_access_path(const struct landlock_ruleset *const domain,
*/
while (true) {
struct dentry *parent_dentry;
const struct landlock_rule *rule;
layer_mask = unmask_layers(domain, &walker_path,
access_request, layer_mask);
if (layer_mask == 0) {
/* Stops when a rule from each layer grants access. */
allowed = true;
break;
/*
* If at least all accesses allowed on the destination are
* already allowed on the source, respectively if there are at
* least as many restrictions on the destination as on the
* source, then we can safely refer files from the source to
* the destination without risking a privilege escalation.
* This also applies in the case of RENAME_EXCHANGE, which
* implies checks in both directions. This is crucial for
* standalone multilayered security policies. Furthermore,
* this helps prevent policy writers from shooting themselves in the
* foot.
*/
if (unlikely(is_dom_check &&
no_more_access(
layer_masks_parent1, layer_masks_child1,
child1_is_directory, layer_masks_parent2,
layer_masks_child2,
child2_is_directory))) {
allowed_parent1 = scope_to_request(
access_request_parent1, layer_masks_parent1);
allowed_parent2 = scope_to_request(
access_request_parent2, layer_masks_parent2);
/* Stops when all accesses are granted. */
if (allowed_parent1 && allowed_parent2)
break;
/*
* Now, downgrades the remaining checks from domain
* handled accesses to requested accesses.
*/
is_dom_check = false;
access_masked_parent1 = access_request_parent1;
access_masked_parent2 = access_request_parent2;
}
rule = find_rule(domain, walker_path.dentry);
allowed_parent1 = unmask_layers(rule, access_masked_parent1,
layer_masks_parent1);
allowed_parent2 = unmask_layers(rule, access_masked_parent2,
layer_masks_parent2);
/* Stops when a rule from each layer grants access. */
if (allowed_parent1 && allowed_parent2)
break;
jump_up:
if (walker_path.dentry == walker_path.mnt->mnt_root) {
if (follow_up(&walker_path)) {
@ -286,7 +597,6 @@ static int check_access_path(const struct landlock_ruleset *const domain,
* Stops at the real root. Denies access
* because not all layers have granted access.
*/
allowed = false;
break;
}
}
@ -296,7 +606,8 @@ static int check_access_path(const struct landlock_ruleset *const domain,
* access to internal filesystems (e.g. nsfs, which is
* reachable through /proc/<pid>/ns/<namespace>).
*/
allowed = !!(walker_path.mnt->mnt_flags & MNT_INTERNAL);
allowed_parent1 = allowed_parent2 =
!!(walker_path.mnt->mnt_flags & MNT_INTERNAL);
break;
}
parent_dentry = dget_parent(walker_path.dentry);
@ -304,11 +615,40 @@ static int check_access_path(const struct landlock_ruleset *const domain,
walker_path.dentry = parent_dentry;
}
path_put(&walker_path);
return allowed ? 0 : -EACCES;
if (allowed_parent1 && allowed_parent2)
return 0;
/*
* This prioritizes EACCES over EXDEV for all actions, including
* renames with RENAME_EXCHANGE.
*/
if (likely(is_eacces(layer_masks_parent1, access_request_parent1) ||
is_eacces(layer_masks_parent2, access_request_parent2)))
return -EACCES;
/*
* Gracefully forbids reparenting if the destination directory
* hierarchy is not a superset of restrictions of the source directory
* hierarchy, or if LANDLOCK_ACCESS_FS_REFER is not allowed by the
* source or the destination.
*/
return -EXDEV;
}
static inline int check_access_path(const struct landlock_ruleset *const domain,
const struct path *const path,
access_mask_t access_request)
{
layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {};
access_request = init_layer_masks(domain, access_request, &layer_masks);
return check_access_path_dual(domain, path, access_request,
&layer_masks, NULL, 0, NULL, NULL);
}
static inline int current_check_access_path(const struct path *const path,
const u32 access_request)
const access_mask_t access_request)
{
const struct landlock_ruleset *const dom =
landlock_get_current_domain();
@ -318,6 +658,239 @@ static inline int current_check_access_path(const struct path *const path,
return check_access_path(dom, path, access_request);
}
static inline access_mask_t get_mode_access(const umode_t mode)
{
switch (mode & S_IFMT) {
case S_IFLNK:
return LANDLOCK_ACCESS_FS_MAKE_SYM;
case 0:
/* A zero mode translates to S_IFREG. */
case S_IFREG:
return LANDLOCK_ACCESS_FS_MAKE_REG;
case S_IFDIR:
return LANDLOCK_ACCESS_FS_MAKE_DIR;
case S_IFCHR:
return LANDLOCK_ACCESS_FS_MAKE_CHAR;
case S_IFBLK:
return LANDLOCK_ACCESS_FS_MAKE_BLOCK;
case S_IFIFO:
return LANDLOCK_ACCESS_FS_MAKE_FIFO;
case S_IFSOCK:
return LANDLOCK_ACCESS_FS_MAKE_SOCK;
default:
WARN_ON_ONCE(1);
return 0;
}
}
static inline access_mask_t maybe_remove(const struct dentry *const dentry)
{
if (d_is_negative(dentry))
return 0;
return d_is_dir(dentry) ? LANDLOCK_ACCESS_FS_REMOVE_DIR :
LANDLOCK_ACCESS_FS_REMOVE_FILE;
}
/**
* collect_domain_accesses - Walk through a file path and collect accesses
*
* @domain: Domain to check against.
* @mnt_root: Last directory to check.
* @dir: Directory to start the walk from.
* @layer_masks_dom: Where to store the collected accesses.
*
* This helper is useful to begin a path walk from the @dir directory to a
* @mnt_root directory used as a mount point. This mount point is the common
* ancestor between the source and the destination of a renamed and linked
* file. While walking from @dir to @mnt_root, we record all the domain's
* allowed accesses in @layer_masks_dom.
*
* This is similar to check_access_path_dual() but much simpler because it only
* handles walking on the same mount point and only checks one set of accesses.
*
* Returns:
* - true if all the domain access rights are allowed for @dir;
* - false if the walk reached @mnt_root.
*/
static bool collect_domain_accesses(
const struct landlock_ruleset *const domain,
const struct dentry *const mnt_root, struct dentry *dir,
layer_mask_t (*const layer_masks_dom)[LANDLOCK_NUM_ACCESS_FS])
{
unsigned long access_dom;
bool ret = false;
if (WARN_ON_ONCE(!domain || !mnt_root || !dir || !layer_masks_dom))
return true;
if (is_nouser_or_private(dir))
return true;
access_dom = init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS,
layer_masks_dom);
dget(dir);
while (true) {
struct dentry *parent_dentry;
/* Gets all layers allowing all domain accesses. */
if (unmask_layers(find_rule(domain, dir), access_dom,
layer_masks_dom)) {
/*
* Stops when all handled accesses are allowed by at
* least one rule in each layer.
*/
ret = true;
break;
}
/* We should not reach a root other than @mnt_root. */
if (dir == mnt_root || WARN_ON_ONCE(IS_ROOT(dir)))
break;
parent_dentry = dget_parent(dir);
dput(dir);
dir = parent_dentry;
}
dput(dir);
return ret;
}
/**
* current_check_refer_path - Check if a rename or link action is allowed
*
* @old_dentry: File or directory requested to be moved or linked.
* @new_dir: Destination parent directory.
* @new_dentry: Destination file or directory.
* @removable: Set to true if it is a rename operation.
* @exchange: Set to true if it is a rename operation with RENAME_EXCHANGE.
*
* Because of its unprivileged constraints, Landlock relies on file hierarchies
* (and not only inodes) to tie access rights to files. Being able to link or
* rename a file hierarchy brings some challenges. Indeed, moving or linking a
* file (i.e. creating a new reference to an inode) can have an impact on the
* actions allowed for a set of files if it would change its parent directory
* (i.e. reparenting).
*
* To avoid trivial access right bypasses, Landlock first checks if the file or
* directory requested to be moved would gain new access rights inherited from
* its new hierarchy. Before returning any error, Landlock then checks that
* the parent source hierarchy and the destination hierarchy would allow the
* link or rename action. If it is not the case, an error with EACCES is
* returned to inform user space that there is no way to remove or create the
* requested source file type. If it should be allowed but the new inherited
* access rights would be greater than the source access rights, then the
* kernel returns an error with EXDEV. Prioritizing EACCES over EXDEV enables
* user space to abort the whole operation if there is no way to do it, or to
* manually copy the source to the destination if this remains allowed, e.g.
* because file creation is allowed on the destination directory but not direct
* linking.
*
* To achieve this goal, the kernel needs to compare two file hierarchies: the
* one identifying the source file or directory (including itself), and the
* destination one. This can be seen as a multilayer partial ordering problem.
* The kernel walks through these paths and collects in a matrix the access
* rights that are denied per layer. These matrices are then compared to see
* if the destination one has more (or the same) restrictions as the source
* one. If this is the case, the requested action will not return EXDEV, which
* doesn't mean the action is allowed. The parent hierarchy of the source
* (i.e. parent directory), and the destination hierarchy must also be checked
* to verify that they explicitly allow such action (i.e. referencing,
* creation and potentially removal rights). The kernel implementation is then
* required to rely on potentially four matrices of access rights: one for the
* source file or directory (i.e. the child), a potentially other one for the
* other source/destination (in case of RENAME_EXCHANGE), one for the source
* parent hierarchy and a last one for the destination hierarchy. These
* ephemeral matrices take some space on the stack, which limits the number of
* layers to a deemed reasonable number: 16.
*
* Returns:
* - 0 if access is allowed;
* - -EXDEV if @old_dentry would inherit new access rights from @new_dir;
* - -EACCES if file removal or creation is denied.
*/
static int current_check_refer_path(struct dentry *const old_dentry,
const struct path *const new_dir,
struct dentry *const new_dentry,
const bool removable, const bool exchange)
{
const struct landlock_ruleset *const dom =
landlock_get_current_domain();
bool allow_parent1, allow_parent2;
access_mask_t access_request_parent1, access_request_parent2;
struct path mnt_dir;
layer_mask_t layer_masks_parent1[LANDLOCK_NUM_ACCESS_FS],
layer_masks_parent2[LANDLOCK_NUM_ACCESS_FS];
if (!dom)
return 0;
if (WARN_ON_ONCE(dom->num_layers < 1))
return -EACCES;
if (unlikely(d_is_negative(old_dentry)))
return -ENOENT;
if (exchange) {
if (unlikely(d_is_negative(new_dentry)))
return -ENOENT;
access_request_parent1 =
get_mode_access(d_backing_inode(new_dentry)->i_mode);
} else {
access_request_parent1 = 0;
}
access_request_parent2 =
get_mode_access(d_backing_inode(old_dentry)->i_mode);
if (removable) {
access_request_parent1 |= maybe_remove(old_dentry);
access_request_parent2 |= maybe_remove(new_dentry);
}
/* The mount points are the same for old and new paths, cf. EXDEV. */
if (old_dentry->d_parent == new_dir->dentry) {
/*
* The LANDLOCK_ACCESS_FS_REFER access right is not required
* for same-directory referer (i.e. no reparenting).
*/
access_request_parent1 = init_layer_masks(
dom, access_request_parent1 | access_request_parent2,
&layer_masks_parent1);
return check_access_path_dual(dom, new_dir,
access_request_parent1,
&layer_masks_parent1, NULL, 0,
NULL, NULL);
}
/* Backward compatibility: no reparenting support. */
if (!(get_handled_accesses(dom) & LANDLOCK_ACCESS_FS_REFER))
return -EXDEV;
access_request_parent1 |= LANDLOCK_ACCESS_FS_REFER;
access_request_parent2 |= LANDLOCK_ACCESS_FS_REFER;
/* Saves the common mount point. */
mnt_dir.mnt = new_dir->mnt;
mnt_dir.dentry = new_dir->mnt->mnt_root;
/* new_dir->dentry is equal to new_dentry->d_parent */
allow_parent1 = collect_domain_accesses(dom, mnt_dir.dentry,
old_dentry->d_parent,
&layer_masks_parent1);
allow_parent2 = collect_domain_accesses(
dom, mnt_dir.dentry, new_dir->dentry, &layer_masks_parent2);
if (allow_parent1 && allow_parent2)
return 0;
/*
* To be able to compare source and destination domain access rights,
* take into account the @old_dentry access rights aggregated with its
* parent access rights. This will be useful to compare with the
* destination parent access rights.
*/
return check_access_path_dual(dom, &mnt_dir, access_request_parent1,
&layer_masks_parent1, old_dentry,
access_request_parent2,
&layer_masks_parent2,
exchange ? new_dentry : NULL);
}
/* Inode hooks */
static void hook_inode_free_security(struct inode *const inode)
@ -436,8 +1009,8 @@ static void hook_sb_delete(struct super_block *const sb)
if (prev_inode)
iput(prev_inode);
/* Waits for pending iput() in release_inode(). */
wait_var_event(&landlock_superblock(sb)->inode_refs, !atomic_long_read(
&landlock_superblock(sb)->inode_refs));
wait_var_event(&landlock_superblock(sb)->inode_refs,
!atomic_long_read(&landlock_superblock(sb)->inode_refs));
}
/*
@ -459,8 +1032,8 @@ static void hook_sb_delete(struct super_block *const sb)
* a dedicated user space option would be required (e.g. as a ruleset flag).
*/
static int hook_sb_mount(const char *const dev_name,
const struct path *const path, const char *const type,
const unsigned long flags, void *const data)
const struct path *const path, const char *const type,
const unsigned long flags, void *const data)
{
if (!landlock_get_current_domain())
return 0;
@ -468,7 +1041,7 @@ static int hook_sb_mount(const char *const dev_name,
}
static int hook_move_mount(const struct path *const from_path,
const struct path *const to_path)
const struct path *const to_path)
{
if (!landlock_get_current_domain())
return 0;
@ -502,7 +1075,7 @@ static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts)
* view of the filesystem.
*/
static int hook_sb_pivotroot(const struct path *const old_path,
const struct path *const new_path)
const struct path *const new_path)
{
if (!landlock_get_current_domain())
return 0;
@ -511,97 +1084,34 @@ static int hook_sb_pivotroot(const struct path *const old_path,
/* Path hooks */
static inline u32 get_mode_access(const umode_t mode)
{
switch (mode & S_IFMT) {
case S_IFLNK:
return LANDLOCK_ACCESS_FS_MAKE_SYM;
case 0:
/* A zero mode translates to S_IFREG. */
case S_IFREG:
return LANDLOCK_ACCESS_FS_MAKE_REG;
case S_IFDIR:
return LANDLOCK_ACCESS_FS_MAKE_DIR;
case S_IFCHR:
return LANDLOCK_ACCESS_FS_MAKE_CHAR;
case S_IFBLK:
return LANDLOCK_ACCESS_FS_MAKE_BLOCK;
case S_IFIFO:
return LANDLOCK_ACCESS_FS_MAKE_FIFO;
case S_IFSOCK:
return LANDLOCK_ACCESS_FS_MAKE_SOCK;
default:
WARN_ON_ONCE(1);
return 0;
}
}
/*
* Creating multiple links or renaming may lead to privilege escalations if not
* handled properly. Indeed, we must be sure that the source doesn't gain more
* privileges by being accessible from the destination. This is getting more
* complex when dealing with multiple layers. The whole picture can be seen as
* a multilayer partial ordering problem. A future version of Landlock will
* deal with that.
*/
static int hook_path_link(struct dentry *const old_dentry,
const struct path *const new_dir,
struct dentry *const new_dentry)
const struct path *const new_dir,
struct dentry *const new_dentry)
{
const struct landlock_ruleset *const dom =
landlock_get_current_domain();
if (!dom)
return 0;
/* The mount points are the same for old and new paths, cf. EXDEV. */
if (old_dentry->d_parent != new_dir->dentry)
/* Gracefully forbids reparenting. */
return -EXDEV;
if (unlikely(d_is_negative(old_dentry)))
return -ENOENT;
return check_access_path(dom, new_dir,
get_mode_access(d_backing_inode(old_dentry)->i_mode));
}
static inline u32 maybe_remove(const struct dentry *const dentry)
{
if (d_is_negative(dentry))
return 0;
return d_is_dir(dentry) ? LANDLOCK_ACCESS_FS_REMOVE_DIR :
LANDLOCK_ACCESS_FS_REMOVE_FILE;
return current_check_refer_path(old_dentry, new_dir, new_dentry, false,
false);
}
static int hook_path_rename(const struct path *const old_dir,
struct dentry *const old_dentry,
const struct path *const new_dir,
struct dentry *const new_dentry)
struct dentry *const old_dentry,
const struct path *const new_dir,
struct dentry *const new_dentry,
const unsigned int flags)
{
const struct landlock_ruleset *const dom =
landlock_get_current_domain();
if (!dom)
return 0;
/* The mount points are the same for old and new paths, cf. EXDEV. */
if (old_dir->dentry != new_dir->dentry)
/* Gracefully forbids reparenting. */
return -EXDEV;
if (unlikely(d_is_negative(old_dentry)))
return -ENOENT;
/* RENAME_EXCHANGE is handled because directories are the same. */
return check_access_path(dom, old_dir, maybe_remove(old_dentry) |
maybe_remove(new_dentry) |
get_mode_access(d_backing_inode(old_dentry)->i_mode));
/* old_dir refers to old_dentry->d_parent and new_dir->mnt */
return current_check_refer_path(old_dentry, new_dir, new_dentry, true,
!!(flags & RENAME_EXCHANGE));
}
static int hook_path_mkdir(const struct path *const dir,
struct dentry *const dentry, const umode_t mode)
struct dentry *const dentry, const umode_t mode)
{
return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_DIR);
}
static int hook_path_mknod(const struct path *const dir,
struct dentry *const dentry, const umode_t mode,
const unsigned int dev)
struct dentry *const dentry, const umode_t mode,
const unsigned int dev)
{
const struct landlock_ruleset *const dom =
landlock_get_current_domain();
@ -612,28 +1122,29 @@ static int hook_path_mknod(const struct path *const dir,
}
static int hook_path_symlink(const struct path *const dir,
struct dentry *const dentry, const char *const old_name)
struct dentry *const dentry,
const char *const old_name)
{
return current_check_access_path(dir, LANDLOCK_ACCESS_FS_MAKE_SYM);
}
static int hook_path_unlink(const struct path *const dir,
struct dentry *const dentry)
struct dentry *const dentry)
{
return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_FILE);
}
static int hook_path_rmdir(const struct path *const dir,
struct dentry *const dentry)
struct dentry *const dentry)
{
return current_check_access_path(dir, LANDLOCK_ACCESS_FS_REMOVE_DIR);
}
/* File hooks */
static inline u32 get_file_access(const struct file *const file)
static inline access_mask_t get_file_access(const struct file *const file)
{
u32 access = 0;
access_mask_t access = 0;
if (file->f_mode & FMODE_READ) {
/* A directory can only be opened in read mode. */
@ -688,5 +1199,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
__init void landlock_add_fs_hooks(void)
{
security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
LANDLOCK_NAME);
LANDLOCK_NAME);
}


@ -50,14 +50,14 @@ struct landlock_superblock_security {
atomic_long_t inode_refs;
};
static inline struct landlock_inode_security *landlock_inode(
const struct inode *const inode)
static inline struct landlock_inode_security *
landlock_inode(const struct inode *const inode)
{
return inode->i_security + landlock_blob_sizes.lbs_inode;
}
static inline struct landlock_superblock_security *landlock_superblock(
const struct super_block *const superblock)
static inline struct landlock_superblock_security *
landlock_superblock(const struct super_block *const superblock)
{
return superblock->s_security + landlock_blob_sizes.lbs_superblock;
}
@ -65,6 +65,7 @@ static inline struct landlock_superblock_security *landlock_superblock(
__init void landlock_add_fs_hooks(void);
int landlock_append_fs_rule(struct landlock_ruleset *const ruleset,
const struct path *const path, u32 access_hierarchy);
const struct path *const path,
access_mask_t access_hierarchy);
#endif /* _SECURITY_LANDLOCK_FS_H */


@ -9,13 +9,19 @@
#ifndef _SECURITY_LANDLOCK_LIMITS_H
#define _SECURITY_LANDLOCK_LIMITS_H
#include <linux/bitops.h>
#include <linux/limits.h>
#include <uapi/linux/landlock.h>
#define LANDLOCK_MAX_NUM_LAYERS 64
/* clang-format off */
#define LANDLOCK_MAX_NUM_LAYERS 16
#define LANDLOCK_MAX_NUM_RULES U32_MAX
#define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_MAKE_SYM
#define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_REFER
#define LANDLOCK_MASK_ACCESS_FS ((LANDLOCK_LAST_ACCESS_FS << 1) - 1)
#define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS)
/* clang-format on */
#endif /* _SECURITY_LANDLOCK_LIMITS_H */
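
A side note on the limits above: bumping LANDLOCK_LAST_ACCESS_FS from LANDLOCK_ACCESS_FS_MAKE_SYM to LANDLOCK_ACCESS_FS_REFER widens the derived mask from 13 to 14 bits. A small sketch of the arithmetic, assuming the v5.19 uapi value of (1ULL << 13) for the REFER right:

/* Illustration only: how the derived constants expand with ABI v2. */
#include <stdint.h>
#include <stdio.h>

#define REFER_BIT	(UINT64_C(1) << 13)	/* LANDLOCK_ACCESS_FS_REFER */
#define LAST_ACCESS_FS	REFER_BIT		/* LANDLOCK_LAST_ACCESS_FS */
#define MASK_ACCESS_FS	((LAST_ACCESS_FS << 1) - 1)

int main(void)
{
	/* Prints mask=0x3fff num=14: all 14 filesystem access bits. */
	printf("mask=%#jx num=%d\n", (uintmax_t)MASK_ACCESS_FS,
	       __builtin_popcountll(MASK_ACCESS_FS));
	return 0;
}
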


@ -17,9 +17,9 @@
#include "object.h"
struct landlock_object *landlock_create_object(
const struct landlock_object_underops *const underops,
void *const underobj)
struct landlock_object *
landlock_create_object(const struct landlock_object_underops *const underops,
void *const underobj)
{
struct landlock_object *new_object;


@ -76,9 +76,9 @@ struct landlock_object {
};
};
struct landlock_object *landlock_create_object(
const struct landlock_object_underops *const underops,
void *const underobj);
struct landlock_object *
landlock_create_object(const struct landlock_object_underops *const underops,
void *const underobj);
void landlock_put_object(struct landlock_object *const object);


@ -30,7 +30,7 @@
* means a subset of) the @child domain.
*/
static bool domain_scope_le(const struct landlock_ruleset *const parent,
const struct landlock_ruleset *const child)
const struct landlock_ruleset *const child)
{
const struct landlock_hierarchy *walker;
@ -48,7 +48,7 @@ static bool domain_scope_le(const struct landlock_ruleset *const parent,
}
static bool task_is_scoped(const struct task_struct *const parent,
const struct task_struct *const child)
const struct task_struct *const child)
{
bool is_scoped;
const struct landlock_ruleset *dom_parent, *dom_child;
@ -62,7 +62,7 @@ static bool task_is_scoped(const struct task_struct *const parent,
}
static int task_ptrace(const struct task_struct *const parent,
const struct task_struct *const child)
const struct task_struct *const child)
{
/* Quick return for non-landlocked tasks. */
if (!landlocked(parent))
@ -86,7 +86,7 @@ static int task_ptrace(const struct task_struct *const parent,
* granted, -errno if denied.
*/
static int hook_ptrace_access_check(struct task_struct *const child,
const unsigned int mode)
const unsigned int mode)
{
return task_ptrace(current, child);
}
@ -116,5 +116,5 @@ static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = {
__init void landlock_add_ptrace_hooks(void)
{
security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks),
LANDLOCK_NAME);
LANDLOCK_NAME);
}


@ -28,8 +28,9 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
{
struct landlock_ruleset *new_ruleset;
new_ruleset = kzalloc(struct_size(new_ruleset, fs_access_masks,
num_layers), GFP_KERNEL_ACCOUNT);
new_ruleset =
kzalloc(struct_size(new_ruleset, fs_access_masks, num_layers),
GFP_KERNEL_ACCOUNT);
if (!new_ruleset)
return ERR_PTR(-ENOMEM);
refcount_set(&new_ruleset->usage, 1);
@ -44,7 +45,8 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers)
return new_ruleset;
}
struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask)
struct landlock_ruleset *
landlock_create_ruleset(const access_mask_t fs_access_mask)
{
struct landlock_ruleset *new_ruleset;
@ -66,11 +68,10 @@ static void build_check_rule(void)
BUILD_BUG_ON(rule.num_layers < LANDLOCK_MAX_NUM_LAYERS);
}
static struct landlock_rule *create_rule(
struct landlock_object *const object,
const struct landlock_layer (*const layers)[],
const u32 num_layers,
const struct landlock_layer *const new_layer)
static struct landlock_rule *
create_rule(struct landlock_object *const object,
const struct landlock_layer (*const layers)[], const u32 num_layers,
const struct landlock_layer *const new_layer)
{
struct landlock_rule *new_rule;
u32 new_num_layers;
@ -85,7 +86,7 @@ static struct landlock_rule *create_rule(
new_num_layers = num_layers;
}
new_rule = kzalloc(struct_size(new_rule, layers, new_num_layers),
GFP_KERNEL_ACCOUNT);
GFP_KERNEL_ACCOUNT);
if (!new_rule)
return ERR_PTR(-ENOMEM);
RB_CLEAR_NODE(&new_rule->node);
@ -94,7 +95,7 @@ static struct landlock_rule *create_rule(
new_rule->num_layers = new_num_layers;
/* Copies the original layer stack. */
memcpy(new_rule->layers, layers,
flex_array_size(new_rule, layers, num_layers));
flex_array_size(new_rule, layers, num_layers));
if (new_layer)
/* Adds a copy of @new_layer on the layer stack. */
new_rule->layers[new_rule->num_layers - 1] = *new_layer;
@ -142,9 +143,9 @@ static void build_check_ruleset(void)
* access rights.
*/
static int insert_rule(struct landlock_ruleset *const ruleset,
struct landlock_object *const object,
const struct landlock_layer (*const layers)[],
size_t num_layers)
struct landlock_object *const object,
const struct landlock_layer (*const layers)[],
size_t num_layers)
{
struct rb_node **walker_node;
struct rb_node *parent_node = NULL;
@ -156,8 +157,8 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
return -ENOENT;
walker_node = &(ruleset->root.rb_node);
while (*walker_node) {
struct landlock_rule *const this = rb_entry(*walker_node,
struct landlock_rule, node);
struct landlock_rule *const this =
rb_entry(*walker_node, struct landlock_rule, node);
if (this->object != object) {
parent_node = *walker_node;
@ -194,7 +195,7 @@ static int insert_rule(struct landlock_ruleset *const ruleset,
* ruleset and a domain.
*/
new_rule = create_rule(object, &this->layers, this->num_layers,
&(*layers)[0]);
&(*layers)[0]);
if (IS_ERR(new_rule))
return PTR_ERR(new_rule);
rb_replace_node(&this->node, &new_rule->node, &ruleset->root);
@ -228,13 +229,14 @@ static void build_check_layer(void)
/* @ruleset must be locked by the caller. */
int landlock_insert_rule(struct landlock_ruleset *const ruleset,
struct landlock_object *const object, const u32 access)
struct landlock_object *const object,
const access_mask_t access)
{
struct landlock_layer layers[] = {{
struct landlock_layer layers[] = { {
.access = access,
/* When @level is zero, insert_rule() extends @ruleset. */
.level = 0,
}};
} };
build_check_layer();
return insert_rule(ruleset, object, &layers, ARRAY_SIZE(layers));
@ -257,7 +259,7 @@ static void put_hierarchy(struct landlock_hierarchy *hierarchy)
}
static int merge_ruleset(struct landlock_ruleset *const dst,
struct landlock_ruleset *const src)
struct landlock_ruleset *const src)
{
struct landlock_rule *walker_rule, *next_rule;
int err = 0;
@ -282,11 +284,11 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
dst->fs_access_masks[dst->num_layers - 1] = src->fs_access_masks[0];
/* Merges the @src tree. */
rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
&src->root, node) {
struct landlock_layer layers[] = {{
rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root,
node) {
struct landlock_layer layers[] = { {
.level = dst->num_layers,
}};
} };
if (WARN_ON_ONCE(walker_rule->num_layers != 1)) {
err = -EINVAL;
@ -298,7 +300,7 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
}
layers[0].access = walker_rule->layers[0].access;
err = insert_rule(dst, walker_rule->object, &layers,
ARRAY_SIZE(layers));
ARRAY_SIZE(layers));
if (err)
goto out_unlock;
}
@ -310,7 +312,7 @@ static int merge_ruleset(struct landlock_ruleset *const dst,
}
static int inherit_ruleset(struct landlock_ruleset *const parent,
struct landlock_ruleset *const child)
struct landlock_ruleset *const child)
{
struct landlock_rule *walker_rule, *next_rule;
int err = 0;
@ -325,9 +327,10 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
/* Copies the @parent tree. */
rbtree_postorder_for_each_entry_safe(walker_rule, next_rule,
&parent->root, node) {
&parent->root, node) {
err = insert_rule(child, walker_rule->object,
&walker_rule->layers, walker_rule->num_layers);
&walker_rule->layers,
walker_rule->num_layers);
if (err)
goto out_unlock;
}
@ -338,7 +341,7 @@ static int inherit_ruleset(struct landlock_ruleset *const parent,
}
/* Copies the parent layer stack and leaves a space for the new layer. */
memcpy(child->fs_access_masks, parent->fs_access_masks,
flex_array_size(parent, fs_access_masks, parent->num_layers));
flex_array_size(parent, fs_access_masks, parent->num_layers));
if (WARN_ON_ONCE(!parent->hierarchy)) {
err = -EINVAL;
@ -358,8 +361,7 @@ static void free_ruleset(struct landlock_ruleset *const ruleset)
struct landlock_rule *freeme, *next;
might_sleep();
rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root,
node)
rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root, node)
free_rule(freeme);
put_hierarchy(ruleset->hierarchy);
kfree(ruleset);
@ -397,9 +399,9 @@ void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset)
* Returns the intersection of @parent and @ruleset, or returns @parent if
* @ruleset is empty, or returns a duplicate of @ruleset if @parent is empty.
*/
struct landlock_ruleset *landlock_merge_ruleset(
struct landlock_ruleset *const parent,
struct landlock_ruleset *const ruleset)
struct landlock_ruleset *
landlock_merge_ruleset(struct landlock_ruleset *const parent,
struct landlock_ruleset *const ruleset)
{
struct landlock_ruleset *new_dom;
u32 num_layers;
@ -421,8 +423,8 @@ struct landlock_ruleset *landlock_merge_ruleset(
new_dom = create_ruleset(num_layers);
if (IS_ERR(new_dom))
return new_dom;
new_dom->hierarchy = kzalloc(sizeof(*new_dom->hierarchy),
GFP_KERNEL_ACCOUNT);
new_dom->hierarchy =
kzalloc(sizeof(*new_dom->hierarchy), GFP_KERNEL_ACCOUNT);
if (!new_dom->hierarchy) {
err = -ENOMEM;
goto out_put_dom;
@ -449,9 +451,9 @@ struct landlock_ruleset *landlock_merge_ruleset(
/*
* The returned access has the same lifetime as @ruleset.
*/
const struct landlock_rule *landlock_find_rule(
const struct landlock_ruleset *const ruleset,
const struct landlock_object *const object)
const struct landlock_rule *
landlock_find_rule(const struct landlock_ruleset *const ruleset,
const struct landlock_object *const object)
{
const struct rb_node *node;
@ -459,8 +461,8 @@ const struct landlock_rule *landlock_find_rule(
return NULL;
node = ruleset->root.rb_node;
while (node) {
struct landlock_rule *this = rb_entry(node,
struct landlock_rule, node);
struct landlock_rule *this =
rb_entry(node, struct landlock_rule, node);
if (this->object == object)
return this;


@ -9,13 +9,26 @@
#ifndef _SECURITY_LANDLOCK_RULESET_H
#define _SECURITY_LANDLOCK_RULESET_H
#include <linux/bitops.h>
#include <linux/build_bug.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/refcount.h>
#include <linux/workqueue.h>
#include "limits.h"
#include "object.h"
typedef u16 access_mask_t;
/* Makes sure all filesystem access rights can be stored. */
static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS);
/* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */
static_assert(sizeof(unsigned long) >= sizeof(access_mask_t));
typedef u16 layer_mask_t;
/* Makes sure all layers can be checked. */
static_assert(BITS_PER_TYPE(layer_mask_t) >= LANDLOCK_MAX_NUM_LAYERS);
/**
* struct landlock_layer - Access rights for a given layer
*/
@ -28,7 +41,7 @@ struct landlock_layer {
* @access: Bitfield of allowed actions on the kernel object. They are
* relative to the object type (e.g. %LANDLOCK_ACTION_FS_READ).
*/
u16 access;
access_mask_t access;
};
/**
@ -135,26 +148,28 @@ struct landlock_ruleset {
* layers are set once and never changed for the
* lifetime of the ruleset.
*/
u16 fs_access_masks[];
access_mask_t fs_access_masks[];
};
};
};
struct landlock_ruleset *landlock_create_ruleset(const u32 fs_access_mask);
struct landlock_ruleset *
landlock_create_ruleset(const access_mask_t fs_access_mask);
void landlock_put_ruleset(struct landlock_ruleset *const ruleset);
void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset);
int landlock_insert_rule(struct landlock_ruleset *const ruleset,
struct landlock_object *const object, const u32 access);
struct landlock_object *const object,
const access_mask_t access);
struct landlock_ruleset *landlock_merge_ruleset(
struct landlock_ruleset *const parent,
struct landlock_ruleset *const ruleset);
struct landlock_ruleset *
landlock_merge_ruleset(struct landlock_ruleset *const parent,
struct landlock_ruleset *const ruleset);
const struct landlock_rule *landlock_find_rule(
const struct landlock_ruleset *const ruleset,
const struct landlock_object *const object);
const struct landlock_rule *
landlock_find_rule(const struct landlock_ruleset *const ruleset,
const struct landlock_object *const object);
static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset)
{


@ -43,9 +43,10 @@
* @src: User space pointer or NULL.
* @usize: (Alleged) size of the data pointed to by @src.
*/
static __always_inline int copy_min_struct_from_user(void *const dst,
const size_t ksize, const size_t ksize_min,
const void __user *const src, const size_t usize)
static __always_inline int
copy_min_struct_from_user(void *const dst, const size_t ksize,
const size_t ksize_min, const void __user *const src,
const size_t usize)
{
/* Checks buffer inconsistencies. */
BUILD_BUG_ON(!dst);
@ -93,7 +94,7 @@ static void build_check_abi(void)
/* Ruleset handling */
static int fop_ruleset_release(struct inode *const inode,
struct file *const filp)
struct file *const filp)
{
struct landlock_ruleset *ruleset = filp->private_data;
@ -102,15 +103,15 @@ static int fop_ruleset_release(struct inode *const inode,
}
static ssize_t fop_dummy_read(struct file *const filp, char __user *const buf,
const size_t size, loff_t *const ppos)
const size_t size, loff_t *const ppos)
{
/* Dummy handler to enable FMODE_CAN_READ. */
return -EINVAL;
}
static ssize_t fop_dummy_write(struct file *const filp,
const char __user *const buf, const size_t size,
loff_t *const ppos)
const char __user *const buf, const size_t size,
loff_t *const ppos)
{
/* Dummy handler to enable FMODE_CAN_WRITE. */
return -EINVAL;
@ -128,7 +129,7 @@ static const struct file_operations ruleset_fops = {
.write = fop_dummy_write,
};
#define LANDLOCK_ABI_VERSION 1
#define LANDLOCK_ABI_VERSION 2
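
The ABI bump to 2 is what lets applications follow the best-effort pattern described in the new "Explain how to support Landlock" documentation: probe the version first, then drop LANDLOCK_ACCESS_FS_REFER from the handled accesses when running on an ABI v1 kernel. A minimal sketch, assuming v5.19 uapi headers and with error handling trimmed:

#define _GNU_SOURCE
#include <linux/landlock.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct landlock_ruleset_attr attr = {
		.handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_REG |
				     LANDLOCK_ACCESS_FS_REFER,
	};
	const long abi = syscall(__NR_landlock_create_ruleset, NULL, 0,
				 LANDLOCK_CREATE_RULESET_VERSION);
	int ruleset_fd;

	if (abi < 0)
		return 1;	/* Landlock disabled or not built in. */
	if (abi < 2)
		/* ABI v1 kernels reject unknown access bits. */
		attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_REFER;
	ruleset_fd = syscall(__NR_landlock_create_ruleset, &attr,
			     sizeof(attr), 0);
	if (ruleset_fd < 0)
		return 1;
	/* ... add rules and call landlock_restrict_self() as usual ... */
	close(ruleset_fd);
	return 0;
}
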
/**
* sys_landlock_create_ruleset - Create a new ruleset
@ -168,22 +169,23 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
return -EOPNOTSUPP;
if (flags) {
if ((flags == LANDLOCK_CREATE_RULESET_VERSION)
&& !attr && !size)
if ((flags == LANDLOCK_CREATE_RULESET_VERSION) && !attr &&
!size)
return LANDLOCK_ABI_VERSION;
return -EINVAL;
}
/* Copies raw user space buffer. */
err = copy_min_struct_from_user(&ruleset_attr, sizeof(ruleset_attr),
offsetofend(typeof(ruleset_attr), handled_access_fs),
attr, size);
offsetofend(typeof(ruleset_attr),
handled_access_fs),
attr, size);
if (err)
return err;
/* Checks content (and 32-bits cast). */
if ((ruleset_attr.handled_access_fs | LANDLOCK_MASK_ACCESS_FS) !=
LANDLOCK_MASK_ACCESS_FS)
LANDLOCK_MASK_ACCESS_FS)
return -EINVAL;
/* Checks arguments and transforms to kernel struct. */
@ -193,7 +195,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
/* Creates anonymous FD referring to the ruleset. */
ruleset_fd = anon_inode_getfd("[landlock-ruleset]", &ruleset_fops,
ruleset, O_RDWR | O_CLOEXEC);
ruleset, O_RDWR | O_CLOEXEC);
if (ruleset_fd < 0)
landlock_put_ruleset(ruleset);
return ruleset_fd;
@ -204,7 +206,7 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
* landlock_put_ruleset() on the return value.
*/
static struct landlock_ruleset *get_ruleset_from_fd(const int fd,
const fmode_t mode)
const fmode_t mode)
{
struct fd ruleset_f;
struct landlock_ruleset *ruleset;
@ -244,8 +246,8 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
struct fd f;
int err = 0;
BUILD_BUG_ON(!__same_type(fd,
((struct landlock_path_beneath_attr *)NULL)->parent_fd));
BUILD_BUG_ON(!__same_type(
fd, ((struct landlock_path_beneath_attr *)NULL)->parent_fd));
/* Handles O_PATH. */
f = fdget_raw(fd);
@ -257,10 +259,10 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
* pipefs).
*/
if ((f.file->f_op == &ruleset_fops) ||
(f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
(f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
d_is_negative(f.file->f_path.dentry) ||
IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
(f.file->f_path.mnt->mnt_flags & MNT_INTERNAL) ||
(f.file->f_path.dentry->d_sb->s_flags & SB_NOUSER) ||
d_is_negative(f.file->f_path.dentry) ||
IS_PRIVATE(d_backing_inode(f.file->f_path.dentry))) {
err = -EBADFD;
goto out_fdput;
}
@ -290,19 +292,18 @@ static int get_path_from_fd(const s32 fd, struct path *const path)
*
* - EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
* - EINVAL: @flags is not 0, or inconsistent access in the rule (i.e.
* &landlock_path_beneath_attr.allowed_access is not a subset of the rule's
* accesses);
* &landlock_path_beneath_attr.allowed_access is not a subset of the
* ruleset handled accesses);
* - ENOMSG: Empty accesses (e.g. &landlock_path_beneath_attr.allowed_access);
* - EBADF: @ruleset_fd is not a file descriptor for the current thread, or a
* member of @rule_attr is not a file descriptor as expected;
* - EBADFD: @ruleset_fd is not a ruleset file descriptor, or a member of
* @rule_attr is not the expected file descriptor type (e.g. file open
* without O_PATH);
* @rule_attr is not the expected file descriptor type;
* - EPERM: @ruleset_fd has no write access to the underlying ruleset;
* - EFAULT: @rule_attr inconsistency.
*/
SYSCALL_DEFINE4(landlock_add_rule,
const int, ruleset_fd, const enum landlock_rule_type, rule_type,
SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd,
const enum landlock_rule_type, rule_type,
const void __user *const, rule_attr, const __u32, flags)
{
struct landlock_path_beneath_attr path_beneath_attr;
@ -317,20 +318,24 @@ SYSCALL_DEFINE4(landlock_add_rule,
if (flags)
return -EINVAL;
if (rule_type != LANDLOCK_RULE_PATH_BENEATH)
return -EINVAL;
/* Copies raw user space buffer, only one type for now. */
res = copy_from_user(&path_beneath_attr, rule_attr,
sizeof(path_beneath_attr));
if (res)
return -EFAULT;
/* Gets and checks the ruleset. */
ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_WRITE);
if (IS_ERR(ruleset))
return PTR_ERR(ruleset);
if (rule_type != LANDLOCK_RULE_PATH_BENEATH) {
err = -EINVAL;
goto out_put_ruleset;
}
/* Copies raw user space buffer, only one type for now. */
res = copy_from_user(&path_beneath_attr, rule_attr,
sizeof(path_beneath_attr));
if (res) {
err = -EFAULT;
goto out_put_ruleset;
}
/*
* Informs about useless rule: empty allowed_access (i.e. deny rules)
* are ignored in path walks.
@ -344,7 +349,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
* (ruleset->fs_access_masks[0] is automatically upgraded to 64-bits).
*/
if ((path_beneath_attr.allowed_access | ruleset->fs_access_masks[0]) !=
ruleset->fs_access_masks[0]) {
ruleset->fs_access_masks[0]) {
err = -EINVAL;
goto out_put_ruleset;
}
@ -356,7 +361,7 @@ SYSCALL_DEFINE4(landlock_add_rule,
/* Imports the new rule. */
err = landlock_append_fs_rule(ruleset, &path,
path_beneath_attr.allowed_access);
path_beneath_attr.allowed_access);
path_put(&path);
out_put_ruleset:
@ -389,8 +394,8 @@ SYSCALL_DEFINE4(landlock_add_rule,
* - E2BIG: The maximum number of stacked rulesets is reached for the current
* thread.
*/
SYSCALL_DEFINE2(landlock_restrict_self,
const int, ruleset_fd, const __u32, flags)
SYSCALL_DEFINE2(landlock_restrict_self, const int, ruleset_fd, const __u32,
flags)
{
struct landlock_ruleset *new_dom, *ruleset;
struct cred *new_cred;
@ -400,18 +405,18 @@ SYSCALL_DEFINE2(landlock_restrict_self,
if (!landlock_initialized)
return -EOPNOTSUPP;
/* No flag for now. */
if (flags)
return -EINVAL;
/*
* Similar checks as for seccomp(2), except that an -EPERM may be
* returned.
*/
if (!task_no_new_privs(current) &&
!ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
!ns_capable_noaudit(current_user_ns(), CAP_SYS_ADMIN))
return -EPERM;
/* No flag for now. */
if (flags)
return -EINVAL;
/* Gets and checks the ruleset. */
ruleset = get_ruleset_from_fd(ruleset_fd, FMODE_CAN_READ);
if (IS_ERR(ruleset))


@ -1198,15 +1198,8 @@ int security_path_rename(const struct path *old_dir, struct dentry *old_dentry,
(d_is_positive(new_dentry) && IS_PRIVATE(d_backing_inode(new_dentry)))))
return 0;
if (flags & RENAME_EXCHANGE) {
int err = call_int_hook(path_rename, 0, new_dir, new_dentry,
old_dir, old_dentry);
if (err)
return err;
}
return call_int_hook(path_rename, 0, old_dir, old_dentry, new_dir,
new_dentry);
new_dentry, flags);
}
EXPORT_SYMBOL(security_path_rename);


@ -264,17 +264,26 @@ static int tomoyo_path_link(struct dentry *old_dentry, const struct path *new_di
* @old_dentry: Pointer to "struct dentry".
* @new_parent: Pointer to "struct path".
* @new_dentry: Pointer to "struct dentry".
* @flags: Rename options.
*
* Returns 0 on success, negative value otherwise.
*/
static int tomoyo_path_rename(const struct path *old_parent,
struct dentry *old_dentry,
const struct path *new_parent,
struct dentry *new_dentry)
struct dentry *new_dentry,
const unsigned int flags)
{
struct path path1 = { .mnt = old_parent->mnt, .dentry = old_dentry };
struct path path2 = { .mnt = new_parent->mnt, .dentry = new_dentry };
if (flags & RENAME_EXCHANGE) {
const int err = tomoyo_path2_perm(TOMOYO_TYPE_RENAME, &path2,
&path1);
if (err)
return err;
}
return tomoyo_path2_perm(TOMOYO_TYPE_RENAME, &path1, &path2);
}


@ -18,10 +18,11 @@
#include "common.h"
#ifndef O_PATH
#define O_PATH 010000000
#define O_PATH 010000000
#endif
TEST(inconsistent_attr) {
TEST(inconsistent_attr)
{
const long page_size = sysconf(_SC_PAGESIZE);
char *const buf = malloc(page_size + 1);
struct landlock_ruleset_attr *const ruleset_attr = (void *)buf;
@ -34,20 +35,26 @@ TEST(inconsistent_attr) {
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 1, 0));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 7, 0));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(NULL, 1, 0));
/* The size is less than sizeof(struct landlock_attr_enforce). */
ASSERT_EQ(EFAULT, errno);
ASSERT_EQ(-1, landlock_create_ruleset(NULL,
sizeof(struct landlock_ruleset_attr), 0));
ASSERT_EQ(-1, landlock_create_ruleset(
NULL, sizeof(struct landlock_ruleset_attr), 0));
ASSERT_EQ(EFAULT, errno);
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size + 1, 0));
ASSERT_EQ(E2BIG, errno);
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr,
sizeof(struct landlock_ruleset_attr), 0));
/* Checks minimal valid attribute size. */
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, 8, 0));
ASSERT_EQ(ENOMSG, errno);
ASSERT_EQ(-1, landlock_create_ruleset(
ruleset_attr,
sizeof(struct landlock_ruleset_attr), 0));
ASSERT_EQ(ENOMSG, errno);
ASSERT_EQ(-1, landlock_create_ruleset(ruleset_attr, page_size, 0));
ASSERT_EQ(ENOMSG, errno);
@ -63,38 +70,44 @@ TEST(inconsistent_attr) {
free(buf);
}
TEST(abi_version) {
TEST(abi_version)
{
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
};
ASSERT_EQ(1, landlock_create_ruleset(NULL, 0,
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(2, landlock_create_ruleset(NULL, 0,
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
LANDLOCK_CREATE_RULESET_VERSION));
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
LANDLOCK_CREATE_RULESET_VERSION));
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr),
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(-1,
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
LANDLOCK_CREATE_RULESET_VERSION));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0,
LANDLOCK_CREATE_RULESET_VERSION | 1 << 31));
LANDLOCK_CREATE_RULESET_VERSION |
1 << 31));
ASSERT_EQ(EINVAL, errno);
}
TEST(inval_create_ruleset_flags) {
/* Tests ordering of syscall argument checks. */
TEST(create_ruleset_checks_ordering)
{
const int last_flag = LANDLOCK_CREATE_RULESET_VERSION;
const int invalid_flag = last_flag << 1;
int ruleset_fd;
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
};
/* Checks priority for invalid flags. */
ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0, invalid_flag));
ASSERT_EQ(EINVAL, errno);
@ -102,44 +115,121 @@ TEST(inval_create_ruleset_flags) {
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
invalid_flag));
invalid_flag));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr), invalid_flag));
ASSERT_EQ(-1,
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
invalid_flag));
ASSERT_EQ(EINVAL, errno);
}
TEST(empty_path_beneath_attr) {
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
};
const int ruleset_fd = landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr), 0);
/* Checks too big ruleset_attr size. */
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, -1, 0));
ASSERT_EQ(E2BIG, errno);
/* Checks too small ruleset_attr size. */
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0, 0));
ASSERT_EQ(EINVAL, errno);
ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 1, 0));
ASSERT_EQ(EINVAL, errno);
/* Checks valid call. */
ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_LE(0, ruleset_fd);
/* Similar to struct landlock_path_beneath_attr.parent_fd = 0 */
ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
NULL, 0));
ASSERT_EQ(EFAULT, errno);
ASSERT_EQ(0, close(ruleset_fd));
}
TEST(inval_fd_enforce) {
ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
/* Tests ordering of syscall argument checks. */
TEST(add_rule_checks_ordering)
{
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
};
struct landlock_path_beneath_attr path_beneath_attr = {
.allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
.parent_fd = -1,
};
const int ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
ASSERT_LE(0, ruleset_fd);
/* Checks invalid flags. */
ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 1));
ASSERT_EQ(EINVAL, errno);
/* Checks invalid ruleset FD. */
ASSERT_EQ(-1, landlock_add_rule(-1, 0, NULL, 0));
ASSERT_EQ(EBADF, errno);
/* Checks invalid rule type. */
ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, 0, NULL, 0));
ASSERT_EQ(EINVAL, errno);
/* Checks invalid rule attr. */
ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
NULL, 0));
ASSERT_EQ(EFAULT, errno);
/* Checks invalid path_beneath.parent_fd. */
ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath_attr, 0));
ASSERT_EQ(EBADF, errno);
/* Checks valid call. */
path_beneath_attr.parent_fd =
open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
ASSERT_LE(0, path_beneath_attr.parent_fd);
ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath_attr, 0));
ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
ASSERT_EQ(0, close(ruleset_fd));
}
TEST(unpriv_enforce_without_no_new_privs) {
int err;
/* Tests ordering of syscall argument and permission checks. */
TEST(restrict_self_checks_ordering)
{
const struct landlock_ruleset_attr ruleset_attr = {
.handled_access_fs = LANDLOCK_ACCESS_FS_EXECUTE,
};
struct landlock_path_beneath_attr path_beneath_attr = {
.allowed_access = LANDLOCK_ACCESS_FS_EXECUTE,
.parent_fd = -1,
};
const int ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_LE(0, ruleset_fd);
path_beneath_attr.parent_fd =
open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
ASSERT_LE(0, path_beneath_attr.parent_fd);
ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath_attr, 0));
ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
/* Checks unprivileged enforcement without no_new_privs. */
drop_caps(_metadata);
err = landlock_restrict_self(-1, 0);
ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
ASSERT_EQ(EPERM, errno);
ASSERT_EQ(err, -1);
ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
ASSERT_EQ(EPERM, errno);
ASSERT_EQ(-1, landlock_restrict_self(ruleset_fd, 0));
ASSERT_EQ(EPERM, errno);
ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
/* Checks invalid flags. */
ASSERT_EQ(-1, landlock_restrict_self(-1, -1));
ASSERT_EQ(EINVAL, errno);
/* Checks invalid ruleset FD. */
ASSERT_EQ(-1, landlock_restrict_self(-1, 0));
ASSERT_EQ(EBADF, errno);
/* Checks valid call. */
ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0));
ASSERT_EQ(0, close(ruleset_fd));
}
TEST(ruleset_fd_io)
@ -151,8 +241,8 @@ TEST(ruleset_fd_io)
char buf;
drop_caps(_metadata);
ruleset_fd = landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr), 0);
ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_LE(0, ruleset_fd);
ASSERT_EQ(-1, write(ruleset_fd, ".", 1));
@ -197,14 +287,15 @@ TEST(ruleset_fd_transfer)
drop_caps(_metadata);
/* Creates a test ruleset with a simple rule. */
ruleset_fd_tx = landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr), 0);
ruleset_fd_tx =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
ASSERT_LE(0, ruleset_fd_tx);
path_beneath_attr.parent_fd = open("/tmp", O_PATH | O_NOFOLLOW |
O_DIRECTORY | O_CLOEXEC);
path_beneath_attr.parent_fd =
open("/tmp", O_PATH | O_NOFOLLOW | O_DIRECTORY | O_CLOEXEC);
ASSERT_LE(0, path_beneath_attr.parent_fd);
ASSERT_EQ(0, landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath_attr, 0));
ASSERT_EQ(0,
landlock_add_rule(ruleset_fd_tx, LANDLOCK_RULE_PATH_BENEATH,
&path_beneath_attr, 0));
ASSERT_EQ(0, close(path_beneath_attr.parent_fd));
cmsg = CMSG_FIRSTHDR(&msg);
@ -215,7 +306,8 @@ TEST(ruleset_fd_transfer)
memcpy(CMSG_DATA(cmsg), &ruleset_fd_tx, sizeof(ruleset_fd_tx));
/* Sends the ruleset FD over a socketpair and then close it. */
ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, socket_fds));
ASSERT_EQ(0, socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0,
socket_fds));
ASSERT_EQ(sizeof(data_tx), sendmsg(socket_fds[0], &msg, 0));
ASSERT_EQ(0, close(socket_fds[0]));
ASSERT_EQ(0, close(ruleset_fd_tx));
@ -226,7 +318,8 @@ TEST(ruleset_fd_transfer)
int ruleset_fd_rx;
*(char *)msg.msg_iov->iov_base = '\0';
ASSERT_EQ(sizeof(data_tx), recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
ASSERT_EQ(sizeof(data_tx),
recvmsg(socket_fds[1], &msg, MSG_CMSG_CLOEXEC));
ASSERT_EQ('.', *(char *)msg.msg_iov->iov_base);
ASSERT_EQ(0, close(socket_fds[1]));
cmsg = CMSG_FIRSTHDR(&msg);


@ -25,6 +25,7 @@
* this to be possible, we must not call abort() but instead exit smoothly
* (hence the step print).
*/
/* clang-format off */
#define TEST_F_FORK(fixture_name, test_name) \
static void fixture_name##_##test_name##_child( \
struct __test_metadata *_metadata, \
@ -71,11 +72,12 @@
FIXTURE_DATA(fixture_name) __attribute__((unused)) *self, \
const FIXTURE_VARIANT(fixture_name) \
__attribute__((unused)) *variant)
/* clang-format on */
#ifndef landlock_create_ruleset
static inline int landlock_create_ruleset(
const struct landlock_ruleset_attr *const attr,
const size_t size, const __u32 flags)
static inline int
landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
const size_t size, const __u32 flags)
{
return syscall(__NR_landlock_create_ruleset, attr, size, flags);
}
@ -83,17 +85,18 @@ static inline int landlock_create_ruleset(
#ifndef landlock_add_rule
static inline int landlock_add_rule(const int ruleset_fd,
const enum landlock_rule_type rule_type,
const void *const rule_attr, const __u32 flags)
const enum landlock_rule_type rule_type,
const void *const rule_attr,
const __u32 flags)
{
return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
rule_attr, flags);
return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type, rule_attr,
flags);
}
#endif
#ifndef landlock_restrict_self
static inline int landlock_restrict_self(const int ruleset_fd,
const __u32 flags)
const __u32 flags)
{
return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
}
@ -111,69 +114,76 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
};
cap_p = cap_get_proc();
EXPECT_NE(NULL, cap_p) {
EXPECT_NE(NULL, cap_p)
{
TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
}
EXPECT_NE(-1, cap_clear(cap_p)) {
EXPECT_NE(-1, cap_clear(cap_p))
{
TH_LOG("Failed to cap_clear: %s", strerror(errno));
}
if (!drop_all) {
EXPECT_NE(-1, cap_set_flag(cap_p, CAP_PERMITTED,
ARRAY_SIZE(caps), caps, CAP_SET)) {
ARRAY_SIZE(caps), caps, CAP_SET))
{
TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
}
}
EXPECT_NE(-1, cap_set_proc(cap_p)) {
EXPECT_NE(-1, cap_set_proc(cap_p))
{
TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
}
EXPECT_NE(-1, cap_free(cap_p)) {
EXPECT_NE(-1, cap_free(cap_p))
{
TH_LOG("Failed to cap_free: %s", strerror(errno));
}
}
/* We cannot put such helpers in a library because of kselftest_harness.h . */
__attribute__((__unused__))
static void disable_caps(struct __test_metadata *const _metadata)
__attribute__((__unused__)) static void
disable_caps(struct __test_metadata *const _metadata)
{
_init_caps(_metadata, false);
}
__attribute__((__unused__))
static void drop_caps(struct __test_metadata *const _metadata)
__attribute__((__unused__)) static void
drop_caps(struct __test_metadata *const _metadata)
{
_init_caps(_metadata, true);
}
static void _effective_cap(struct __test_metadata *const _metadata,
const cap_value_t caps, const cap_flag_value_t value)
const cap_value_t caps, const cap_flag_value_t value)
{
cap_t cap_p;
cap_p = cap_get_proc();
EXPECT_NE(NULL, cap_p) {
EXPECT_NE(NULL, cap_p)
{
TH_LOG("Failed to cap_get_proc: %s", strerror(errno));
}
EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value)) {
EXPECT_NE(-1, cap_set_flag(cap_p, CAP_EFFECTIVE, 1, &caps, value))
{
TH_LOG("Failed to cap_set_flag: %s", strerror(errno));
}
EXPECT_NE(-1, cap_set_proc(cap_p)) {
EXPECT_NE(-1, cap_set_proc(cap_p))
{
TH_LOG("Failed to cap_set_proc: %s", strerror(errno));
}
EXPECT_NE(-1, cap_free(cap_p)) {
EXPECT_NE(-1, cap_free(cap_p))
{
TH_LOG("Failed to cap_free: %s", strerror(errno));
}
}
__attribute__((__unused__))
static void set_cap(struct __test_metadata *const _metadata,
const cap_value_t caps)
__attribute__((__unused__)) static void
set_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
{
_effective_cap(_metadata, caps, CAP_SET);
}
__attribute__((__unused__))
static void clear_cap(struct __test_metadata *const _metadata,
const cap_value_t caps)
__attribute__((__unused__)) static void
clear_cap(struct __test_metadata *const _metadata, const cap_value_t caps)
{
_effective_cap(_metadata, caps, CAP_CLEAR);
}

(File diff suppressed because it is too large.)

@ -26,9 +26,10 @@ static void create_domain(struct __test_metadata *const _metadata)
.handled_access_fs = LANDLOCK_ACCESS_FS_MAKE_BLOCK,
};
ruleset_fd = landlock_create_ruleset(&ruleset_attr,
sizeof(ruleset_attr), 0);
EXPECT_LE(0, ruleset_fd) {
ruleset_fd =
landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0);
EXPECT_LE(0, ruleset_fd)
{
TH_LOG("Failed to create a ruleset: %s", strerror(errno));
}
EXPECT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0));
@ -43,7 +44,7 @@ static int test_ptrace_read(const pid_t pid)
int procenv_path_size, fd;
procenv_path_size = snprintf(procenv_path, sizeof(procenv_path),
path_template, pid);
path_template, pid);
if (procenv_path_size >= sizeof(procenv_path))
return E2BIG;
@ -59,9 +60,12 @@ static int test_ptrace_read(const pid_t pid)
return 0;
}
FIXTURE(hierarchy) { };
/* clang-format off */
FIXTURE(hierarchy) {};
/* clang-format on */
FIXTURE_VARIANT(hierarchy) {
FIXTURE_VARIANT(hierarchy)
{
const bool domain_both;
const bool domain_parent;
const bool domain_child;
@ -83,7 +87,9 @@ FIXTURE_VARIANT(hierarchy) {
* \ P2 -> P1 : allow
* 'P2
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
/* clang-format on */
.domain_both = false,
.domain_parent = false,
.domain_child = false,
@ -98,7 +104,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_without_domain) {
* | P2 |
* '------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
/* clang-format on */
.domain_both = false,
.domain_parent = false,
.domain_child = true,
@ -112,7 +120,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_one_domain) {
* '
* P2
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
/* clang-format on */
.domain_both = false,
.domain_parent = true,
.domain_child = false,
@ -127,7 +137,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_parent_domain) {
* | P2 |
* '------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
/* clang-format on */
.domain_both = false,
.domain_parent = true,
.domain_child = true,
@ -142,7 +154,9 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_sibling_domain) {
* | P2 |
* '-------------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
/* clang-format on */
.domain_both = true,
.domain_parent = false,
.domain_child = false,
@ -158,7 +172,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_sibling_domain) {
* | '------' |
* '-----------------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
/* clang-format on */
.domain_both = true,
.domain_parent = false,
.domain_child = true,
@ -174,7 +190,9 @@ FIXTURE_VARIANT_ADD(hierarchy, allow_with_nested_domain) {
* | P2 |
* '-----------------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
/* clang-format on */
.domain_both = true,
.domain_parent = true,
.domain_child = false,
@ -192,17 +210,21 @@ FIXTURE_VARIANT_ADD(hierarchy, deny_with_nested_and_parent_domain) {
* | '------' |
* '-----------------'
*/
/* clang-format off */
FIXTURE_VARIANT_ADD(hierarchy, deny_with_forked_domain) {
/* clang-format on */
.domain_both = true,
.domain_parent = true,
.domain_child = true,
};
FIXTURE_SETUP(hierarchy)
{ }
{
}
FIXTURE_TEARDOWN(hierarchy)
{ }
{
}
/* Test PTRACE_TRACEME and PTRACE_ATTACH for parent and child. */
TEST_F(hierarchy, trace)
@ -330,7 +352,7 @@ TEST_F(hierarchy, trace)
ASSERT_EQ(1, write(pipe_parent[1], ".", 1));
ASSERT_EQ(child, waitpid(child, &status, 0));
if (WIFSIGNALED(status) || !WIFEXITED(status) ||
WEXITSTATUS(status) != EXIT_SUCCESS)
WEXITSTATUS(status) != EXIT_SUCCESS)
_metadata->passed = 0;
}