License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied
to a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files, created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file-by-file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when neither scanner could find any license traces, the file was
considered to have no license information in it, and the top-level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If such a file was under a */uapi/* path, it became "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". Results of that were:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL-family license was found in the file, or if it had no licensing
in it (per the prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe and Thomas, with confirmation in some
cases by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were any new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files, Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, three files were found to
have copy/paste license identifier errors, and were fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours doing a detailed manual inspection
and review of the 12,461 files patched in the initial patch version, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-01 14:07:57 +00:00
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_LIST_H
#define _LINUX_LIST_H

#include <linux/container_of.h>
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/poison.h>
#include <linux/const.h>

#include <asm/barrier.h>

/*
 * Circular doubly linked list implementation.
 *
 * Some of the internal functions ("__xxx") are useful when
 * manipulating whole lists rather than single entries, as
 * sometimes we already know the next/prev entries and we can
 * generate better code by using them directly rather than
 * using the generic single-entry routines.
 */

#define LIST_HEAD_INIT(name) { &(name), &(name) }

#define LIST_HEAD(name) \
	struct list_head name = LIST_HEAD_INIT(name)

/**
 * INIT_LIST_HEAD - Initialize a list_head structure
 * @list: list_head structure to be initialized.
 *
 * Initializes the list_head to point to itself. If it is a list header,
 * the result is an empty list.
 */
static inline void INIT_LIST_HEAD(struct list_head *list)
{
	WRITE_ONCE(list->next, list);
	WRITE_ONCE(list->prev, list);
}
list: Introduce CONFIG_LIST_HARDENED
Numerous production kernel configs (see [1, 2]) are choosing to enable
CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened
configs [3]. The motivation behind this is that the option can be used
as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025
are mitigated by the option [4]).
The feature was never designed with performance in mind, yet common
list manipulation happens in hot paths all over the kernel.
Introduce CONFIG_LIST_HARDENED, which performs list pointer checking
inline, and only upon list corruption calls the reporting slow path.
To generate optimal machine code with CONFIG_LIST_HARDENED:
1. Elide checking for pointer values which upon dereference would
result in an immediate access fault (i.e. minimal hardening
checks). The trade-off is lower-quality error reports.
2. Use the __preserve_most function attribute (available with Clang,
but not yet with GCC) to minimize the code footprint for calling
the reporting slow path. As a result, function size of callers is
reduced by avoiding saving registers before calling the rarely
called reporting slow path.
Note that all TUs in lib/Makefile already disable function tracing,
including list_debug.c, and __preserve_most's implied notrace has
no effect in this case.
3. Because the inline checks are a subset of the full set of checks in
__list_*_valid_or_report(), always return false if the inline
checks failed. This avoids redundant compare and conditional
branch right after return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.
Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the
Kconfig variables are changed to reflect that: DEBUG_LIST selects
LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on
DEBUG_LIST.
Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on
average (up to 20-30% on some test cases).
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4]
Signed-off-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/r/20230811151847.1594958-3-elver@google.com
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-11 15:18:40 +00:00
#ifdef CONFIG_LIST_HARDENED

#ifdef CONFIG_DEBUG_LIST
# define __list_valid_slowpath
#else
# define __list_valid_slowpath __cold __preserve_most
#endif

/*
 * Performs the full set of list corruption checks before __list_add().
 * On list corruption reports a warning, and returns false.
 */
extern bool __list_valid_slowpath __list_add_valid_or_report(struct list_head *new,
							     struct list_head *prev,
							     struct list_head *next);

/*
 * Performs list corruption checks before __list_add(). Returns false if a
 * corruption is detected, true otherwise.
 *
 * With CONFIG_LIST_HARDENED only, performs minimal list integrity checking
 * inline to catch non-faulting corruptions, and only if a corruption is
 * detected calls the reporting function __list_add_valid_or_report().
 */
static __always_inline bool __list_add_valid(struct list_head *new,
					     struct list_head *prev,
					     struct list_head *next)
{
	bool ret = true;

	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
		/*
		 * With the hardening version, elide checking if next and prev
		 * are NULL, since the immediate dereference of them below would
		 * result in a fault if NULL.
		 *
		 * With the reduced set of checks, we can afford to inline the
		 * checks, which also gives the compiler a chance to elide some
		 * of them completely if they can be proven at compile-time. If
		 * one of the pre-conditions does not hold, the slow-path will
		 * show a report which pre-condition failed.
		 */
		if (likely(next->prev == prev && prev->next == next && new != prev && new != next))
			return true;
		ret = false;
	}

	ret &= __list_add_valid_or_report(new, prev, next);
	return ret;
}

/*
 * Performs the full set of list corruption checks before __list_del_entry().
 * On list corruption reports a warning, and returns false.
 */
extern bool __list_valid_slowpath __list_del_entry_valid_or_report(struct list_head *entry);

/*
 * Performs list corruption checks before __list_del_entry(). Returns false if a
 * corruption is detected, true otherwise.
 *
 * With CONFIG_LIST_HARDENED only, performs minimal list integrity checking
 * inline to catch non-faulting corruptions, and only if a corruption is
 * detected calls the reporting function __list_del_entry_valid_or_report().
 */
static __always_inline bool __list_del_entry_valid(struct list_head *entry)
{
	bool ret = true;

	if (!IS_ENABLED(CONFIG_DEBUG_LIST)) {
		struct list_head *prev = entry->prev;
		struct list_head *next = entry->next;

		/*
		 * With the hardening version, elide checking if next and prev
		 * are NULL, LIST_POISON1 or LIST_POISON2, since the immediate
		 * dereference of them below would result in a fault.
		 */
		if (likely(prev->next == entry && next->prev == entry))
			return true;
		ret = false;
	}

	ret &= __list_del_entry_valid_or_report(entry);
	return ret;
}
#else
static inline bool __list_add_valid(struct list_head *new,
				    struct list_head *prev,
				    struct list_head *next)
{
	return true;
}

static inline bool __list_del_entry_valid(struct list_head *entry)
{
	return true;
}
#endif

/*
 * Insert a new entry between two known consecutive entries.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	if (!__list_add_valid(new, prev, next))
		return;

	next->prev = new;
	new->next = next;
	new->prev = prev;
	WRITE_ONCE(prev->next, new);
}

/**
 * list_add - add a new entry
 * @new: new entry to be added
 * @head: list head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
static inline void list_add(struct list_head *new, struct list_head *head)
{
	__list_add(new, head, head->next);
}

/**
 * list_add_tail - add a new entry
 * @new: new entry to be added
 * @head: list head to add it before
 *
 * Insert a new entry before the specified head.
 * This is useful for implementing queues.
 */
static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_add(new, head->prev, head);
}

/*
 * Delete a list entry by making the prev/next entries
 * point to each other.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_del(struct list_head *prev, struct list_head *next)
{
	next->prev = prev;
	WRITE_ONCE(prev->next, next);
}

/*
 * Delete a list entry and clear the 'prev' pointer.
 *
 * This is a special-purpose list clearing method used in the networking code
 * for lists allocated as per-cpu, where we don't want to incur the extra
 * WRITE_ONCE() overhead of a regular list_del_init(). The code that uses this
 * needs to check the node 'prev' pointer instead of calling list_empty().
 */
static inline void __list_del_clearprev(struct list_head *entry)
{
	__list_del(entry->prev, entry->next);
	entry->prev = NULL;
}

static inline void __list_del_entry(struct list_head *entry)
{
	if (!__list_del_entry_valid(entry))
		return;

	__list_del(entry->prev, entry->next);
}

/**
 * list_del - deletes entry from list.
 * @entry: the element to delete from the list.
 * Note: list_empty() on entry does not return true after this, the entry is
 * in an undefined state.
 */
static inline void list_del(struct list_head *entry)
{
	__list_del_entry(entry);
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

/**
 * list_replace - replace old entry by new one
 * @old : the element to be replaced
 * @new : the new element to insert
 *
 * If @old was empty, it will be overwritten.
 */
static inline void list_replace(struct list_head *old,
				struct list_head *new)
{
	new->next = old->next;
	new->next->prev = new;
	new->prev = old->prev;
	new->prev->next = new;
}

/**
 * list_replace_init - replace old entry by new one and initialize the old one
 * @old : the element to be replaced
 * @new : the new element to insert
 *
 * If @old was empty, it will be overwritten.
 */
static inline void list_replace_init(struct list_head *old,
				     struct list_head *new)
{
	list_replace(old, new);
	INIT_LIST_HEAD(old);
}

mm: shuffle initial free memory to improve memory-side-cache utilization
Patch series "mm: Randomize free memory", v10.
This patch (of 3):
Randomization of the page allocator improves the average utilization of
a direct-mapped memory-side-cache. Memory side caching is a platform
capability that Linux has been previously exposed to in HPC
(high-performance computing) environments on specialty platforms. In
that instance it was a smaller pool of high-bandwidth-memory relative to
higher-capacity / lower-bandwidth DRAM. Now, this capability is going
to be found on general purpose server platforms where DRAM is a cache in
front of higher latency persistent memory [1].
Robert offered an explanation of the state of the art of Linux
interactions with memory-side-caches [2], and I copy it here:
It's been a problem in the HPC space:
http://www.nersc.gov/research-and-development/knl-cache-mode-performance-coe/
A kernel module called zonesort is available to try to help:
https://software.intel.com/en-us/articles/xeon-phi-software
and this abandoned patch series proposed that for the kernel:
https://lkml.kernel.org/r/20170823100205.17311-1-lukasz.daniluk@intel.com
Dan's patch series doesn't attempt to ensure buffers won't conflict, but
also reduces the chance that the buffers will. This will make performance
more consistent, albeit slower than "optimal" (which is near impossible
to attain in a general-purpose kernel). That's better than forcing
users to deploy remedies like:
"To eliminate this gradual degradation, we have added a Stream
measurement to the Node Health Check that follows each job;
nodes are rebooted whenever their measured memory bandwidth
falls below 300 GB/s."
A replacement for zonesort was merged upstream in commit cc9aec03e58f
("x86/numa_emulation: Introduce uniform split capability"). With this
numa_emulation capability, memory can be split into cache sized
("near-memory" sized) numa nodes. A bind operation to such a node, and
disabling workloads on other nodes, enables full cache performance.
However, once the workload exceeds the cache size then cache conflicts
are unavoidable. While HPC environments might be able to tolerate
time-scheduling of cache sized workloads, for general purpose server
platforms, the oversubscribed cache case will be the common case.
The worst case scenario is that a server system owner benchmarks a
workload at boot with an un-contended cache only to see that performance
degrade over time, even below the average cache performance due to
excessive conflicts. Randomization clips the peaks and fills in the
valleys of cache utilization to yield steady average performance.
Here are some performance impact details of the patches:
1/ An Intel internal synthetic memory bandwidth measurement tool, saw a
3X speedup in a contrived case that tries to force cache conflicts.
The contrived case used the numa_emulation capability to force an
instance of the benchmark to be run in two of the near-memory sized
numa nodes. If both instances were placed on the same emulated node they
would fit and cause zero conflicts. While on separate emulated nodes
without randomization they underutilized the cache and conflicted
unnecessarily due to the in-order allocation per node.
2/ A well known Java server application benchmark was run with a heap
size that exceeded cache size by 3X. The cache conflict rate was 8%
for the first run and degraded to 21% after page allocator aging. With
randomization enabled the rate levelled out at 11%.
3/ A MongoDB workload did not observe measurable difference in
cache-conflict rates, but the overall throughput dropped by 7% with
randomization in one case.
4/ Mel Gorman ran his suite of performance workloads with randomization
enabled on platforms without a memory-side-cache and saw a mix of some
improvements and some losses [3].
While there is potentially significant improvement for applications that
depend on low latency access across a wide working-set, the performance
may be negligible to negative for other workloads. For this reason the
shuffle capability defaults to off unless a direct-mapped
memory-side-cache is detected. Even then, the page_alloc.shuffle=0
parameter can be specified to disable the randomization on those systems.
Outside of memory-side-cache utilization concerns there is potentially
security benefit from randomization. Some data exfiltration and
return-oriented-programming attacks rely on the ability to infer the
location of sensitive data objects. The kernel page allocator, especially
early in system boot, has predictable first-in-first out behavior for
physical pages. Pages are freed in physical address order when first
onlined.
Quoting Kees:
"While we already have a base-address randomization
(CONFIG_RANDOMIZE_MEMORY), attacks against the same hardware and
memory layouts would certainly be using the predictability of
allocation ordering (i.e. for attacks where the base address isn't
important: only the relative positions between allocated memory).
This is common in lots of heap-style attacks. They try to gain
control over ordering by spraying allocations, etc.
I'd really like to see this because it gives us something similar
to CONFIG_SLAB_FREELIST_RANDOM but for the page allocator."
While SLAB_FREELIST_RANDOM reduces the predictability of some local slab
caches it leaves vast bulk of memory to be predictably in order allocated.
However, it should be noted, the concrete security benefits are hard to
quantify, and no known CVE is mitigated by this randomization.
Introduce shuffle_free_memory(), and its helper shuffle_zone(), to perform
a Fisher-Yates shuffle of the page allocator 'free_area' lists when they
are initially populated with free memory at boot and at hotplug time. Do
this based on either the presence of a page_alloc.shuffle=Y command line
parameter, or autodetection of a memory-side-cache (to be added in a
follow-on patch).
The shuffling is done in terms of CONFIG_SHUFFLE_PAGE_ORDER sized free
pages, where the default CONFIG_SHUFFLE_PAGE_ORDER is MAX_ORDER-1, i.e.
order 10 (4MB); this trades off randomization granularity for time spent
shuffling.
MAX_ORDER-1 was chosen to be minimally invasive to the page allocator
while still showing memory-side cache behavior improvements, and the
expectation that the security implications of finer granularity
randomization are mitigated by CONFIG_SLAB_FREELIST_RANDOM. The
performance impact of the shuffling appears to be in the noise compared to
other memory initialization work.
This initial randomization can be undone over time so a follow-on patch is
introduced to inject entropy on page free decisions. It is reasonable to
ask if the page free entropy is sufficient, but it is not enough due to
the in-order initial freeing of pages. At the start of that process
putting page1 in front or behind page0 still keeps them close together,
page2 is still near page1 and has a high chance of being adjacent. As
more pages are added ordering diversity improves, but there is still high
page locality for the low address pages and this leads to no significant
impact to the cache conflict rate.
[1]: https://itpeernetwork.intel.com/intel-optane-dc-persistent-memory-operating-modes/
[2]: https://lkml.kernel.org/r/AT5PR8401MB1169D656C8B5E121752FC0F8AB120@AT5PR8401MB1169.NAMPRD84.PROD.OUTLOOK.COM
[3]: https://lkml.org/lkml/2018/10/12/309
[dan.j.williams@intel.com: fix shuffle enable]
Link: http://lkml.kernel.org/r/154943713038.3858443.4125180191382062871.stgit@dwillia2-desk3.amr.corp.intel.com
[cai@lca.pw: fix SHUFFLE_PAGE_ALLOCATOR help texts]
Link: http://lkml.kernel.org/r/20190425201300.75650-1-cai@lca.pw
Link: http://lkml.kernel.org/r/154899811738.3165233.12325692939590944259.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Qian Cai <cai@lca.pw>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Robert Elliott <elliott@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-05-14 22:41:28 +00:00
/**
 * list_swap - replace entry1 with entry2 and re-add entry1 at entry2's position
 * @entry1: the location to place entry2
 * @entry2: the location to place entry1
 */
static inline void list_swap(struct list_head *entry1,
			     struct list_head *entry2)
{
	struct list_head *pos = entry2->prev;

	list_del(entry2);
	list_replace(entry1, entry2);
	if (pos == entry1)
		pos = entry2;
	list_add(entry1, pos);
}

/**
 * list_del_init - deletes entry from list and reinitialize it.
 * @entry: the element to delete from the list.
 */
static inline void list_del_init(struct list_head *entry)
{
	__list_del_entry(entry);
	INIT_LIST_HEAD(entry);
}

/**
 * list_move - delete from one list and add as another's head
 * @list: the entry to move
 * @head: the head that will precede our entry
 */
static inline void list_move(struct list_head *list, struct list_head *head)
{
	__list_del_entry(list);
	list_add(list, head);
}

/**
 * list_move_tail - delete from one list and add as another's tail
 * @list: the entry to move
 * @head: the head that will follow our entry
 */
static inline void list_move_tail(struct list_head *list,
				  struct list_head *head)
{
	__list_del_entry(list);
	list_add_tail(list, head);
}

/**
 * list_bulk_move_tail - move a subsection of a list to its tail
 * @head: the head that will follow our entry
 * @first: first entry to move
 * @last: last entry to move, can be the same as first
 *
 * Move all entries between @first and including @last before @head.
 * All three entries must belong to the same linked list.
 */
static inline void list_bulk_move_tail(struct list_head *head,
				       struct list_head *first,
				       struct list_head *last)
{
	first->prev->next = last->next;
	last->next->prev = first->prev;

	head->prev->next = first;
	first->prev = head->prev;

	last->next = head;
	head->prev = last;
}

/**
 * list_is_first -- tests whether @list is the first entry in list @head
 * @list: the entry to test
 * @head: the head of the list
 */
static inline int list_is_first(const struct list_head *list, const struct list_head *head)
{
	return list->prev == head;
}

/**
 * list_is_last - tests whether @list is the last entry in list @head
 * @list: the entry to test
 * @head: the head of the list
 */
static inline int list_is_last(const struct list_head *list, const struct list_head *head)
{
	return list->next == head;
}

/**
 * list_is_head - tests whether @list is the list @head
 * @list: the entry to test
 * @head: the head of the list
 */
static inline int list_is_head(const struct list_head *list, const struct list_head *head)
{
	return list == head;
}

/**
 * list_empty - tests whether a list is empty
 * @head: the list to test.
 */
static inline int list_empty(const struct list_head *head)
{
	return READ_ONCE(head->next) == head;
}

/**
 * list_del_init_careful - deletes entry from list and reinitialize it.
 * @entry: the element to delete from the list.
 *
 * This is the same as list_del_init(), except designed to be used
 * together with list_empty_careful() in a way to guarantee ordering
 * of other memory operations.
 *
 * Any memory operations done before a list_del_init_careful() are
 * guaranteed to be visible after a list_empty_careful() test.
 */
static inline void list_del_init_careful(struct list_head *entry)
{
	__list_del_entry(entry);
	WRITE_ONCE(entry->prev, entry);
	smp_store_release(&entry->next, entry);
}

/**
 * list_empty_careful - tests whether a list is empty and not being modified
 * @head: the list to test
 *
 * Description:
 * tests whether a list is empty _and_ checks that no other CPU might be
 * in the process of modifying either member (next or prev)
 *
 * NOTE: using list_empty_careful() without synchronization
 * can only be safe if the only activity that can happen
 * to the list entry is list_del_init(). Eg. it cannot be used
 * if another CPU could re-list_add() it.
 */
static inline int list_empty_careful(const struct list_head *head)
{
	struct list_head *next = smp_load_acquire(&head->next);

	return list_is_head(next, head) && (next == READ_ONCE(head->prev));
}

/**
 * list_rotate_left - rotate the list to the left
 * @head: the head of the list
 */
static inline void list_rotate_left(struct list_head *head)
{
	struct list_head *first;

	if (!list_empty(head)) {
		first = head->next;
		list_move_tail(first, head);
	}
}

/**
 * list_rotate_to_front() - Rotate list to specific item.
 * @list: The desired new front of the list.
 * @head: The head of the list.
 *
 * Rotates list so that @list becomes the new front of the list.
 */
static inline void list_rotate_to_front(struct list_head *list,
					struct list_head *head)
{
	/*
	 * Deletes the list head from the list denoted by @head and
	 * places it as the tail of @list, this effectively rotates the
	 * list so that @list is at the front.
	 */
	list_move_tail(head, list);
}

/**
 * list_is_singular - tests whether a list has just one entry.
 * @head: the list to test.
 */
static inline int list_is_singular(const struct list_head *head)
{
	return !list_empty(head) && (head->next == head->prev);
}

static inline void __list_cut_position(struct list_head *list,
		struct list_head *head, struct list_head *entry)
{
	struct list_head *new_first = entry->next;
	list->next = head->next;
	list->next->prev = list;
	list->prev = entry;
	entry->next = list;
	head->next = new_first;
	new_first->prev = head;
}

/**
 * list_cut_position - cut a list into two
 * @list: a new list to add all removed entries
 * @head: a list with entries
 * @entry: an entry within head, could be the head itself
 *	and if so we won't cut the list
 *
 * This helper moves the initial part of @head, up to and
 * including @entry, from @head to @list. You should
 * pass on @entry an element you know is on @head. @list
 * should be an empty list or a list you do not care about
 * losing its data.
 *
 */
static inline void list_cut_position(struct list_head *list,
		struct list_head *head, struct list_head *entry)
{
	if (list_empty(head))
		return;
	if (list_is_singular(head) && !list_is_head(entry, head) &&
	    (entry != head->next))
		return;
	if (list_is_head(entry, head))
		INIT_LIST_HEAD(list);
	else
		__list_cut_position(list, head, entry);
}

/**
 * list_cut_before - cut a list into two, before given entry
 * @list: a new list to add all removed entries
 * @head: a list with entries
 * @entry: an entry within head, could be the head itself
 *
 * This helper moves the initial part of @head, up to but
 * excluding @entry, from @head to @list. You should pass
 * in @entry an element you know is on @head. @list should
 * be an empty list or a list you do not care about losing
 * its data.
 * If @entry == @head, all entries on @head are moved to
 * @list.
 */
static inline void list_cut_before(struct list_head *list,
				   struct list_head *head,
				   struct list_head *entry)
{
	if (head->next == entry) {
		INIT_LIST_HEAD(list);
		return;
	}
	list->next = head->next;
	list->next->prev = list;
	list->prev = entry->prev;
	list->prev->next = list;
	head->next = entry;
	entry->prev = head;
}

static inline void __list_splice(const struct list_head *list,
				 struct list_head *prev,
				 struct list_head *next)
{
	struct list_head *first = list->next;
	struct list_head *last = list->prev;

	first->prev = prev;
	prev->next = first;

	last->next = next;
	next->prev = last;
}

/**
 * list_splice - join two lists, this is designed for stacks
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 */
static inline void list_splice(const struct list_head *list,
				struct list_head *head)
{
	if (!list_empty(list))
		__list_splice(list, head, head->next);
}

/**
 * list_splice_tail - join two lists, each list being a queue
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 */
static inline void list_splice_tail(struct list_head *list,
				struct list_head *head)
{
	if (!list_empty(list))
		__list_splice(list, head->prev, head);
}

/**
 * list_splice_init - join two lists and reinitialise the emptied list.
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 *
 * The list at @list is reinitialised
 */
static inline void list_splice_init(struct list_head *list,
				    struct list_head *head)
{
	if (!list_empty(list)) {
		__list_splice(list, head, head->next);
		INIT_LIST_HEAD(list);
	}
}

/**
 * list_splice_tail_init - join two lists and reinitialise the emptied list
 * @list: the new list to add.
 * @head: the place to add it in the first list.
 *
 * Each of the lists is a queue.
 * The list at @list is reinitialised
 */
static inline void list_splice_tail_init(struct list_head *list,
					 struct list_head *head)
{
	if (!list_empty(list)) {
		__list_splice(list, head->prev, head);
		INIT_LIST_HEAD(list);
	}
}

/**
 * list_entry - get the struct for this entry
 * @ptr: the &struct list_head pointer.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 */
#define list_entry(ptr, type, member) \
	container_of(ptr, type, member)

Introduce a handy list_first_entry macro
There are many places in the kernel where the construction like
foo = list_entry(head->next, struct foo_struct, list);
are used.
The code might look more descriptive and neat if using the macro
list_first_entry(head, type, member) \
list_entry((head)->next, type, member)
Here is the macro itself and the examples of its usage in the generic code.
If it will turn out to be useful, I can prepare the set of patches to
inject in into arch-specific code, drivers, networking, etc.
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Signed-off-by: Kirill Korotaev <dev@openvz.org>
Cc: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Zach Brown <zach.brown@oracle.com>
Cc: Davide Libenzi <davidel@xmailserver.org>
Cc: John McCutchan <ttb@tentacle.dhs.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: john stultz <johnstul@us.ibm.com>
Cc: Ram Pai <linuxram@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-08 07:30:19 +00:00
/**
 * list_first_entry - get the first element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note, that list is expected to be not empty.
 */
#define list_first_entry(ptr, type, member) \
	list_entry((ptr)->next, type, member)

/**
 * list_last_entry - get the last element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note, that list is expected to be not empty.
 */
#define list_last_entry(ptr, type, member) \
	list_entry((ptr)->prev, type, member)

/**
 * list_first_entry_or_null - get the first element from a list
 * @ptr: the list head to take the element from.
 * @type: the type of the struct this is embedded in.
 * @member: the name of the list_head within the struct.
 *
 * Note that if the list is empty, it returns NULL.
 */
#define list_first_entry_or_null(ptr, type, member) ({ \
	struct list_head *head__ = (ptr); \
	struct list_head *pos__ = READ_ONCE(head__->next); \
	pos__ != head__ ? list_entry(pos__, type, member) : NULL; \
})

/**
 * list_next_entry - get the next element in list
 * @pos: the type * to cursor
 * @member: the name of the list_head within the struct.
 */
#define list_next_entry(pos, member) \
	list_entry((pos)->member.next, typeof(*(pos)), member)

/**
|
|
|
|
* list_next_entry_circular - get the next element in list
|
|
|
|
* @pos: the type * to cursor.
|
|
|
|
* @head: the list head to take the element from.
|
|
|
|
* @member: the name of the list_head within the struct.
|
|
|
|
*
|
|
|
|
* Wraparound if pos is the last element (return the first element).
|
|
|
|
* Note, that list is expected to be not empty.
|
|
|
|
*/
|
|
|
|
#define list_next_entry_circular(pos, head, member) \
|
|
|
|
(list_is_last(&(pos)->member, head) ? \
|
|
|
|
list_first_entry(head, typeof(*(pos)), member) : list_next_entry(pos, member))
|
|
|
|
|
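The circular variant is handy for round-robin cursors that should skip the head and wrap. A user-space sketch under the same assumptions as above (all helpers, including `list_is_last` and `struct worker`, are local illustrations):

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_first_entry(ptr, type, member) list_entry((ptr)->next, type, member)
#define list_next_entry(pos, member) \
	list_entry((pos)->member.next, __typeof__(*(pos)), member)

static int list_is_last(const struct list_head *list, const struct list_head *head)
{
	return list->next == head;
}

#define list_next_entry_circular(pos, head, member) \
	(list_is_last(&(pos)->member, head) ? \
	 list_first_entry(head, __typeof__(*(pos)), member) : list_next_entry(pos, member))

struct worker { int id; struct list_head node; };

/* Advance a cursor 4 times over a 3-element ring {0,1,2}: the visit
 * order is 1, 2, wrap to 0, then 1, encoded as digits of the result. */
static int round_robin_demo(void)
{
	struct worker w[3];
	struct list_head head = { &head, &head };
	struct worker *cur;
	int visits = 0, i;

	for (i = 0; i < 3; i++) {	/* insert at tail */
		w[i].id = i;
		w[i].node.prev = head.prev;
		w[i].node.next = &head;
		head.prev->next = &w[i].node;
		head.prev = &w[i].node;
	}
	cur = list_first_entry(&head, struct worker, node);
	for (i = 0; i < 4; i++) {
		cur = list_next_entry_circular(cur, &head, node);
		visits = visits * 10 + cur->id;
	}
	return visits;
}
```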

/**
 * list_prev_entry - get the prev element in list
 * @pos: the type * to cursor
 * @member: the name of the list_head within the struct.
 */
#define list_prev_entry(pos, member) \
	list_entry((pos)->member.prev, typeof(*(pos)), member)

/**
 * list_prev_entry_circular - get the prev element in list
 * @pos: the type * to cursor.
 * @head: the list head to take the element from.
 * @member: the name of the list_head within the struct.
 *
 * Wraparound if pos is the first element (return the last element).
 * Note, that list is expected to be not empty.
 */
#define list_prev_entry_circular(pos, head, member) \
	(list_is_first(&(pos)->member, head) ? \
	list_last_entry(head, typeof(*(pos)), member) : list_prev_entry(pos, member))

/**
 * list_for_each - iterate over a list
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 */
#define list_for_each(pos, head) \
	for (pos = (head)->next; !list_is_head(pos, (head)); pos = pos->next)

/**
 * list_for_each_rcu - Iterate over a list in an RCU-safe fashion
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 */
#define list_for_each_rcu(pos, head) \
	for (pos = rcu_dereference((head)->next); \
	     !list_is_head(pos, (head)); \
	     pos = rcu_dereference(pos->next))

/**
 * list_for_each_continue - continue iteration over a list
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 *
 * Continue to iterate over a list, continuing after the current position.
 */
#define list_for_each_continue(pos, head) \
	for (pos = pos->next; !list_is_head(pos, (head)); pos = pos->next)

/**
 * list_for_each_prev - iterate over a list backwards
 * @pos: the &struct list_head to use as a loop cursor.
 * @head: the head for your list.
 */
#define list_for_each_prev(pos, head) \
	for (pos = (head)->prev; !list_is_head(pos, (head)); pos = pos->prev)

/**
 * list_for_each_safe - iterate over a list safe against removal of list entry
 * @pos: the &struct list_head to use as a loop cursor.
 * @n: another &struct list_head to use as temporary storage
 * @head: the head for your list.
 */
#define list_for_each_safe(pos, n, head) \
	for (pos = (head)->next, n = pos->next; \
	     !list_is_head(pos, (head)); \
	     pos = n, n = pos->next)
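The `_safe` form pre-fetches the next node into `n` before the loop body runs, so the body may delete `pos` without breaking the walk. A self-contained user-space sketch (the macros and `list_del` are local re-implementations, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define list_is_head(list, head) ((list) == (head))
#define list_for_each_safe(pos, n, head) \
	for (pos = (head)->next, n = pos->next; \
	     !list_is_head(pos, (head)); \
	     pos = n, n = pos->next)

static void list_del(struct list_head *entry)
{
	entry->next->prev = entry->prev;
	entry->prev->next = entry->next;
	entry->next = entry->prev = NULL;
}

/* Drain a 3-element list, deleting every node during iteration.
 * Returns removed-count plus 10 if the list ended up empty. */
static int safe_drain_demo(void)
{
	struct list_head nodes[3];
	struct list_head head = { &head, &head };
	struct list_head *pos, *n;
	int removed = 0, i;

	for (i = 0; i < 3; i++) {	/* insert at tail */
		nodes[i].prev = head.prev;
		nodes[i].next = &head;
		head.prev->next = &nodes[i];
		head.prev = &nodes[i];
	}
	list_for_each_safe(pos, n, &head) {
		list_del(pos);		/* safe: n was saved before the body ran */
		removed++;
	}
	return removed + (head.next == &head ? 10 : 0);
}
```

With plain `list_for_each` the same body would dereference `pos->next` after `pos` had been unlinked and poisoned.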

/**
 * list_for_each_prev_safe - iterate over a list backwards safe against removal of list entry
 * @pos: the &struct list_head to use as a loop cursor.
 * @n: another &struct list_head to use as temporary storage
 * @head: the head for your list.
 */
#define list_for_each_prev_safe(pos, n, head) \
	for (pos = (head)->prev, n = pos->prev; \
	     !list_is_head(pos, (head)); \
	     pos = n, n = pos->prev)

/**
 * list_count_nodes - count nodes in the list
 * @head: the head for your list.
 */
static inline size_t list_count_nodes(struct list_head *head)
{
	struct list_head *pos;
	size_t count = 0;

	list_for_each(pos, head)
		count++;

	return count;
}

/**
 * list_entry_is_head - test if the entry points to the head of the list
 * @pos: the type * to cursor
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_entry_is_head(pos, head, member) \
	list_is_head(&pos->member, (head))

/**
 * list_for_each_entry - iterate over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry(pos, head, member) \
	for (pos = list_first_entry(head, typeof(*pos), member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = list_next_entry(pos, member))

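This is the canonical typed iterator: the cursor is a pointer to the containing type, not to the raw node. A hedged user-space sketch of the full macro chain (everything here, including `struct item`, is redefined locally for illustration):

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_first_entry(ptr, type, member) list_entry((ptr)->next, type, member)
#define list_next_entry(pos, member) \
	list_entry((pos)->member.next, __typeof__(*(pos)), member)
#define list_entry_is_head(pos, head, member) (&pos->member == (head))
#define list_for_each_entry(pos, head, member) \
	for (pos = list_first_entry(head, __typeof__(*pos), member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = list_next_entry(pos, member))

struct item { int value; struct list_head node; };

/* Sum the payload of every entry on a 3-element list. */
static int sum_demo(void)
{
	struct item items[3] = { { .value = 1 }, { .value = 2 }, { .value = 3 } };
	struct list_head head = { &head, &head };
	struct item *pos;
	int sum = 0, i;

	for (i = 0; i < 3; i++) {	/* insert at tail */
		items[i].node.prev = head.prev;
		items[i].node.next = &head;
		head.prev->next = &items[i].node;
		head.prev = &items[i].node;
	}
	list_for_each_entry(pos, &head, node)
		sum += pos->value;
	return sum;
}
```

Note the termination test compares the embedded `list_head`'s address against the head; when the cursor "reaches" the head, `pos` itself points at garbage computed from the head and must never be dereferenced.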
/**
 * list_for_each_entry_reverse - iterate backwards over list of given type.
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry_reverse(pos, head, member) \
	for (pos = list_last_entry(head, typeof(*pos), member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = list_prev_entry(pos, member))

/**
 * list_prepare_entry - prepare a pos entry for use in list_for_each_entry_continue()
 * @pos: the type * to use as a start point
 * @head: the head of the list
 * @member: the name of the list_head within the struct.
 *
 * Prepares a pos entry for use as a start point in list_for_each_entry_continue().
 */
#define list_prepare_entry(pos, head, member) \
	((pos) ? : list_entry(head, typeof(*pos), member))

/**
 * list_for_each_entry_continue - continue iteration over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Continue to iterate over list of given type, continuing after
 * the current position.
 */
#define list_for_each_entry_continue(pos, head, member) \
	for (pos = list_next_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = list_next_entry(pos, member))

/**
 * list_for_each_entry_continue_reverse - iterate backwards from the given point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Start to iterate over list of given type backwards, continuing after
 * the current position.
 */
#define list_for_each_entry_continue_reverse(pos, head, member) \
	for (pos = list_prev_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = list_prev_entry(pos, member))

/**
 * list_for_each_entry_from - iterate over list of given type from the current point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type, continuing from current position.
 */
#define list_for_each_entry_from(pos, head, member) \
	for (; !list_entry_is_head(pos, head, member); \
	     pos = list_next_entry(pos, member))

/**
 * list_for_each_entry_from_reverse - iterate backwards over list of given type
 *				      from the current point
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate backwards over list of given type, continuing from current position.
 */
#define list_for_each_entry_from_reverse(pos, head, member) \
	for (; !list_entry_is_head(pos, head, member); \
	     pos = list_prev_entry(pos, member))

/**
 * list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 */
#define list_for_each_entry_safe(pos, n, head, member) \
	for (pos = list_first_entry(head, typeof(*pos), member), \
	     n = list_next_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = n, n = list_next_entry(n, member))
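The typed `_safe` iterator is the idiomatic way to filter a list in place. A user-space sketch under the same local-redefinition assumptions as the earlier examples (nothing here is the kernel's own code):

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_first_entry(ptr, type, member) list_entry((ptr)->next, type, member)
#define list_next_entry(pos, member) \
	list_entry((pos)->member.next, __typeof__(*(pos)), member)
#define list_entry_is_head(pos, head, member) (&pos->member == (head))
#define list_for_each_entry_safe(pos, n, head, member) \
	for (pos = list_first_entry(head, __typeof__(*pos), member), \
	     n = list_next_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = n, n = list_next_entry(n, member))

static void list_del(struct list_head *e)
{
	e->next->prev = e->prev;
	e->prev->next = e->next;
}

struct item { int value; struct list_head node; };

/* Remove the even-valued entries from a list of 1..4 while iterating;
 * return survivor-sum * 10 + removed-count. */
static int remove_evens_demo(void)
{
	struct item items[4];
	struct list_head head = { &head, &head };
	struct list_head *p;
	struct item *pos, *n;
	int sum = 0, removed = 0, i;

	for (i = 0; i < 4; i++) {	/* values 1..4, insert at tail */
		items[i].value = i + 1;
		items[i].node.prev = head.prev;
		items[i].node.next = &head;
		head.prev->next = &items[i].node;
		head.prev = &items[i].node;
	}
	list_for_each_entry_safe(pos, n, &head, node)
		if (pos->value % 2 == 0) {
			list_del(&pos->node);	/* safe: n already points past pos */
			removed++;
		}
	for (p = head.next; p != &head; p = p->next)
		sum += list_entry(p, struct item, node)->value;
	return sum * 10 + removed;
}
```

"Safe" here means safe against removal of the *current* entry only; if another thread (or the loop body) deletes `n` itself, the cursor still dangles, which is what `list_safe_reset_next()` below exists to repair.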

/**
 * list_for_each_entry_safe_continue - continue list iteration safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type, continuing after current point,
 * safe against removal of list entry.
 */
#define list_for_each_entry_safe_continue(pos, n, head, member) \
	for (pos = list_next_entry(pos, member), \
	     n = list_next_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = n, n = list_next_entry(n, member))

/**
 * list_for_each_entry_safe_from - iterate over list from current point safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate over list of given type from current point, safe against
 * removal of list entry.
 */
#define list_for_each_entry_safe_from(pos, n, head, member) \
	for (n = list_next_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = n, n = list_next_entry(n, member))

/**
 * list_for_each_entry_safe_reverse - iterate backwards over list safe against removal
 * @pos: the type * to use as a loop cursor.
 * @n: another type * to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the list_head within the struct.
 *
 * Iterate backwards over list of given type, safe against removal
 * of list entry.
 */
#define list_for_each_entry_safe_reverse(pos, n, head, member) \
	for (pos = list_last_entry(head, typeof(*pos), member), \
	     n = list_prev_entry(pos, member); \
	     !list_entry_is_head(pos, head, member); \
	     pos = n, n = list_prev_entry(n, member))

/**
 * list_safe_reset_next - reset a stale list_for_each_entry_safe loop
 * @pos: the loop cursor used in the list_for_each_entry_safe loop
 * @n: temporary storage used in list_for_each_entry_safe
 * @member: the name of the list_head within the struct.
 *
 * list_safe_reset_next is not safe to use in general if the list may be
 * modified concurrently (eg. the lock is dropped in the loop body). An
 * exception to this is if the cursor element (pos) is pinned in the list,
 * and list_safe_reset_next is called after re-taking the lock and before
 * completing the current iteration of the loop body.
 */
#define list_safe_reset_next(pos, n, member) \
	n = list_next_entry(pos, member)

/*
 * Double linked lists with a single pointer list head.
 * Mostly useful for hash tables where the two pointer list head is
 * too wasteful.
 * You lose the ability to access the tail in O(1).
 */

#define HLIST_HEAD_INIT { .first = NULL }
#define HLIST_HEAD(name) struct hlist_head name = { .first = NULL }
#define INIT_HLIST_HEAD(ptr) ((ptr)->first = NULL)
static inline void INIT_HLIST_NODE(struct hlist_node *h)
{
	h->next = NULL;
	h->pprev = NULL;
}

/**
 * hlist_unhashed - Has node been removed from list and reinitialized?
 * @h: Node to be checked
 *
 * Note that not all removal functions will leave a node in unhashed
 * state. For example, hlist_nulls_del_init_rcu() does leave the
 * node in unhashed state, but hlist_nulls_del() does not.
 */
static inline int hlist_unhashed(const struct hlist_node *h)
{
	return !h->pprev;
}

/**
 * hlist_unhashed_lockless - Version of hlist_unhashed for lockless use
 * @h: Node to be checked
 *
 * This variant of hlist_unhashed() must be used in lockless contexts
 * to avoid potential load-tearing. The READ_ONCE() is paired with the
 * various WRITE_ONCE() in hlist helpers that are defined below.
 */
static inline int hlist_unhashed_lockless(const struct hlist_node *h)
{
	return !READ_ONCE(h->pprev);
}

/**
 * hlist_empty - Is the specified hlist_head structure an empty hlist?
 * @h: Structure to check.
 */
static inline int hlist_empty(const struct hlist_head *h)
{
	return !READ_ONCE(h->first);
}

static inline void __hlist_del(struct hlist_node *n)
{
	struct hlist_node *next = n->next;
	struct hlist_node **pprev = n->pprev;

	WRITE_ONCE(*pprev, next);
	if (next)
		WRITE_ONCE(next->pprev, pprev);
}

/**
 * hlist_del - Delete the specified hlist_node from its list
 * @n: Node to delete.
 *
 * Note that this function leaves the node in hashed state. Use
 * hlist_del_init() or similar instead to unhash @n.
 */
static inline void hlist_del(struct hlist_node *n)
{
	__hlist_del(n);
	n->next = LIST_POISON1;
	n->pprev = LIST_POISON2;
}

/**
 * hlist_del_init - Delete the specified hlist_node from its list and initialize
 * @n: Node to delete.
 *
 * Note that this function leaves the node in unhashed state.
 */
static inline void hlist_del_init(struct hlist_node *n)
{
	if (!hlist_unhashed(n)) {
		__hlist_del(n);
		INIT_HLIST_NODE(n);
	}
}

/**
 * hlist_add_head - add a new entry at the beginning of the hlist
 * @n: new entry to be added
 * @h: hlist head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	struct hlist_node *first = h->first;
	WRITE_ONCE(n->next, first);
	if (first)
		WRITE_ONCE(first->pprev, &n->next);
	WRITE_ONCE(h->first, n);
	WRITE_ONCE(n->pprev, &h->first);
}
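The single-pointer head plus the `pprev` back-pointer is what makes hlists cheap hash-table buckets. A tiny user-space sketch of a bucketed table (all types and helpers are local re-implementations; `WRITE_ONCE` is dropped since the demo is single-threaded):

```c
#include <assert.h>
#include <stddef.h>

struct hlist_head { struct hlist_node *first; };
struct hlist_node { struct hlist_node *next, **pprev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define hlist_entry(ptr, type, member) container_of(ptr, type, member)

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
	struct hlist_node *first = h->first;
	n->next = first;
	if (first)
		first->pprev = &n->next;
	h->first = n;
	n->pprev = &h->first;
}

struct obj { int key; struct hlist_node node; };

#define NBUCKETS 4

/* Insert keys 1, 5, 6; keys 1 and 5 collide in bucket 1 (mod 4).
 * Walk that bucket and sum the keys found there. */
static int hash_lookup_demo(void)
{
	struct hlist_head table[NBUCKETS] = { { NULL } };
	struct obj objs[3] = { { .key = 1 }, { .key = 5 }, { .key = 6 } };
	struct hlist_node *pos;
	int i, found = 0;

	for (i = 0; i < 3; i++)
		hlist_add_head(&objs[i].node, &table[objs[i].key % NBUCKETS]);

	for (pos = table[1].first; pos; pos = pos->next)
		found += hlist_entry(pos, struct obj, node)->key;
	return found;
}
```

Because `pprev` points at the *pointer* that reaches the node (either `h->first` or a predecessor's `next`), deletion needs no reference to the head, which is exactly why the head can shrink to one pointer.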

/**
 * hlist_add_before - add a new entry before the one specified
 * @n: new entry to be added
 * @next: hlist node to add it before, which must be non-NULL
 */
static inline void hlist_add_before(struct hlist_node *n,
				    struct hlist_node *next)
{
	WRITE_ONCE(n->pprev, next->pprev);
	WRITE_ONCE(n->next, next);
	WRITE_ONCE(next->pprev, &n->next);
	WRITE_ONCE(*(n->pprev), n);
}

/**
 * hlist_add_behind - add a new entry after the one specified
 * @n: new entry to be added
 * @prev: hlist node to add it after, which must be non-NULL
 */
static inline void hlist_add_behind(struct hlist_node *n,
				    struct hlist_node *prev)
{
	WRITE_ONCE(n->next, prev->next);
	WRITE_ONCE(prev->next, n);
	WRITE_ONCE(n->pprev, &prev->next);

	if (n->next)
		WRITE_ONCE(n->next->pprev, &n->next);
}

/**
 * hlist_add_fake - create a fake hlist consisting of a single headless node
 * @n: Node to make a fake list out of
 *
 * This makes @n appear to be its own predecessor on a headless hlist.
 * The point of this is to allow things like hlist_del() to work correctly
 * in cases where there is no list.
 */
static inline void hlist_add_fake(struct hlist_node *n)
{
	n->pprev = &n->next;
}

/**
 * hlist_fake: Is this node a fake hlist?
 * @h: Node to check for being a self-referential fake hlist.
 */
static inline bool hlist_fake(struct hlist_node *h)
{
	return h->pprev == &h->next;
}

/**
 * hlist_is_singular_node - is node the only element of the specified hlist?
 * @n: Node to check for singularity.
 * @h: Header for potentially singular list.
 *
 * Check whether the node is the only node of the head without
 * accessing head, thus avoiding unnecessary cache misses.
 */
static inline bool
hlist_is_singular_node(struct hlist_node *n, struct hlist_head *h)
{
	return !n->next && n->pprev == &h->first;
}

/**
 * hlist_move_list - Move an hlist
 * @old: hlist_head for old list.
 * @new: hlist_head for new list.
 *
 * Move a list from one list head to another. Fixup the pprev
 * reference of the first entry if it exists.
 */
static inline void hlist_move_list(struct hlist_head *old,
				   struct hlist_head *new)
{
	new->first = old->first;
	if (new->first)
		new->first->pprev = &new->first;
	old->first = NULL;
}

/**
 * hlist_splice_init() - move all entries from one list to another
 * @from: hlist_head from which entries will be moved
 * @last: last entry on the @from list
 * @to: hlist_head to which entries will be moved
 *
 * @to can be empty, @from must contain at least @last.
 */
static inline void hlist_splice_init(struct hlist_head *from,
				     struct hlist_node *last,
				     struct hlist_head *to)
{
	if (to->first)
		to->first->pprev = &last->next;
	last->next = to->first;
	to->first = from->first;
	from->first->pprev = &to->first;
	from->first = NULL;
}

#define hlist_entry(ptr, type, member) container_of(ptr,type,member)

#define hlist_for_each(pos, head) \
	for (pos = (head)->first; pos ; pos = pos->next)

#define hlist_for_each_safe(pos, n, head) \
	for (pos = (head)->first; pos && ({ n = pos->next; 1; }); \
	     pos = n)
#define hlist_entry_safe(ptr, type, member) \
	({ typeof(ptr) ____ptr = (ptr); \
	   ____ptr ? hlist_entry(____ptr, type, member) : NULL; \
	})
|
hlist: drop the node parameter from iterators
I'm not sure why, but the hlist for each entry iterators were conceived
list_for_each_entry(pos, head, member)
The hlist ones were greedy and wanted an extra parameter:
hlist_for_each_entry(tpos, pos, head, member)
Why did they need an extra pos parameter? I'm not quite sure. Not only
they don't really need it, it also prevents the iterator from looking
exactly like the list iterator, which is unfortunate.
Besides the semantic patch, there was some manual work required:
- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small amount of places were using the 'node' parameter, this
was modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator
properly, so those had to be fixed up manually.
The semantic patch which is mostly the work of Peter Senna Tschudin is here:
@@
iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
type T;
expression a,c,d,e;
identifier b;
statement S;
@@
-T b;
<+... when != b
(
hlist_for_each_entry(a,
- b,
c, d) S
|
hlist_for_each_entry_continue(a,
- b,
c) S
|
hlist_for_each_entry_from(a,
- b,
c) S
|
hlist_for_each_entry_rcu(a,
- b,
c, d) S
|
hlist_for_each_entry_rcu_bh(a,
- b,
c, d) S
|
hlist_for_each_entry_continue_rcu_bh(a,
- b,
c) S
|
for_each_busy_worker(a, c,
- b,
d) S
|
ax25_uid_for_each(a,
- b,
c) S
|
ax25_for_each(a,
- b,
c) S
|
inet_bind_bucket_for_each(a,
- b,
c) S
|
sctp_for_each_hentry(a,
- b,
c) S
|
sk_for_each(a,
- b,
c) S
|
sk_for_each_rcu(a,
- b,
c) S
|
sk_for_each_from
-(a, b)
+(a)
S
+ sk_for_each_from(a) S
|
sk_for_each_safe(a,
- b,
c, d) S
|
sk_for_each_bound(a,
- b,
c) S
|
hlist_for_each_entry_safe(a,
- b,
c, d, e) S
|
hlist_for_each_entry_continue_rcu(a,
- b,
c) S
|
nr_neigh_for_each(a,
- b,
c) S
|
nr_neigh_for_each_safe(a,
- b,
c, d) S
|
nr_node_for_each(a,
- b,
c) S
|
nr_node_for_each_safe(a,
- b,
c, d) S
|
- for_each_gfn_sp(a, c, d, b) S
+ for_each_gfn_sp(a, c, d) S
|
- for_each_gfn_indirect_valid_sp(a, c, d, b) S
+ for_each_gfn_indirect_valid_sp(a, c, d) S
|
for_each_host(a,
- b,
c) S
|
for_each_host_safe(a,
- b,
c, d) S
|
for_each_mesh_entry(a,
- b,
c, d) S
)
...+>
[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-02-28 01:06:00 +00:00
/**
 * hlist_for_each_entry - iterate over list of given type
 * @pos: the type * to use as a loop cursor.
 * @head: the head for your list.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry(pos, head, member) \
	for (pos = hlist_entry_safe((head)->first, typeof(*(pos)), member);\
	     pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))

/**
 * hlist_for_each_entry_continue - iterate over a hlist continuing after current point
 * @pos: the type * to use as a loop cursor.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_continue(pos, member) \
	for (pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member);\
	     pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))

/**
 * hlist_for_each_entry_from - iterate over a hlist continuing from current point
 * @pos: the type * to use as a loop cursor.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_from(pos, member) \
	for (; pos; \
	     pos = hlist_entry_safe((pos)->member.next, typeof(*(pos)), member))

/**
 * hlist_for_each_entry_safe - iterate over list of given type safe against removal of list entry
 * @pos: the type * to use as a loop cursor.
 * @n: a &struct hlist_node to use as temporary storage
 * @head: the head for your list.
 * @member: the name of the hlist_node within the struct.
 */
#define hlist_for_each_entry_safe(pos, n, head, member) \
	for (pos = hlist_entry_safe((head)->first, typeof(*pos), member);\
	     pos && ({ n = pos->member.next; 1; }); \
	     pos = hlist_entry_safe(n, typeof(*pos), member))

/**
 * hlist_count_nodes - count nodes in the hlist
 * @head: the head for your hlist.
 */
static inline size_t hlist_count_nodes(struct hlist_head *head)
{
	struct hlist_node *pos;
	size_t count = 0;

	hlist_for_each(pos, head)
		count++;

	return count;
}

#endif