# SPDX-License-Identifier: GPL-2.0-only
#
# Network device configuration
#

menuconfig NETDEVICES
	default y if UML
	depends on NET
	bool "Network device support"
	help
	  You can say N here if you don't intend to connect your Linux box to
	  any other computer at all.

	  You'll have to say Y if your computer contains a network card that
	  you want to use under Linux. If you are going to run SLIP or PPP over
	  a telephone line or null modem cable you need to say Y here.
	  Connecting two machines with parallel ports using PLIP needs this,
	  as well as AX.25/KISS for sending Internet traffic over amateur
	  radio links.

	  See also "The Linux Network Administrator's Guide" by Olaf Kirch and
	  Terry Dawson. Available at <http://www.tldp.org/guides.html>.

	  If unsure, say Y.

# All the following symbols are dependent on NETDEVICES - do not repeat
# that for each of the symbols.
if NETDEVICES

config MII
	tristate

config NET_CORE
	default y
	bool "Network core driver support"
	help
	  You can say N here if you do not intend to use any of the
	  networking core drivers (i.e. VLAN, bridging, bonding, etc.)

if NET_CORE

config BONDING
	tristate "Bonding driver support"
	depends on INET
	depends on IPV6 || IPV6=n
	depends on TLS || TLS_DEVICE=n
	help
	  Say 'Y' or 'M' if you wish to be able to 'bond' multiple Ethernet
	  channels together. This is called 'EtherChannel' by Cisco,
	  'Trunking' by Sun, 802.3ad by the IEEE, and 'Bonding' in Linux.

	  The driver supports multiple bonding modes to allow for both high
	  performance and high availability operation.

	  Refer to <file:Documentation/networking/bonding.rst> for more
	  information.

	  To compile this driver as a module, choose M here: the module
	  will be called bonding.
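
	  For example (device names illustrative), an 802.3ad bond can be
	  created and a member interface added with:

	    "ip link add bond0 type bond mode 802.3ad"
	    "ip link set eth0 master bond0"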

config DUMMY
	tristate "Dummy net driver support"
	help
	  This is essentially a bit-bucket device (i.e. traffic you send to
	  this device is consigned into oblivion) with a configurable IP
	  address. It is most commonly used in order to make your currently
	  inactive SLIP address seem like a real address for local programs.
	  If you use SLIP or PPP, you might want to say Y here. It won't
	  enlarge your kernel. What a deal. Read about it in the Network
	  Administrator's Guide, available from
	  <http://www.tldp.org/docs.html#guide>.

	  To compile this driver as a module, choose M here: the module
	  will be called dummy.
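
	  For example (name illustrative), a dummy device can be created
	  with:

	    "ip link add dummy0 type dummy"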

config WIREGUARD
	tristate "WireGuard secure network tunnel"
	depends on NET && INET
	depends on IPV6 || !IPV6
	depends on !KMSAN # KMSAN doesn't support the crypto configs below
	select NET_UDP_TUNNEL
	select DST_CACHE
	select CRYPTO
	select CRYPTO_LIB_CURVE25519
	select CRYPTO_LIB_CHACHA20POLY1305
	select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
	select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
	select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
	select CRYPTO_CURVE25519_X86 if X86 && 64BIT
	select CRYPTO_CHACHA20_NEON if ARM || (ARM64 && KERNEL_MODE_NEON)
	select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
	select CRYPTO_POLY1305_ARM if ARM
	select CRYPTO_BLAKE2S_ARM if ARM
	select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
	select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
	select CRYPTO_POLY1305_MIPS if MIPS
	select CRYPTO_CHACHA_S390 if S390
	help
	  WireGuard is a secure, fast, and easy to use replacement for IPsec
	  that uses modern cryptography and clever networking tricks. It's
	  designed to be fairly general purpose and abstract enough to fit most
	  use cases, while at the same time remaining extremely simple to
	  configure. See www.wireguard.com for more info.

	  It's safe to say Y or M here, as the driver is very lightweight and
	  is only in use when an administrator chooses to add an interface.
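
	  For example, with a recent iproute2 and the wireguard-tools package
	  (interface and key-file names illustrative), an interface can be
	  created and given a key with:

	    "ip link add dev wg0 type wireguard"
	    "wg set wg0 private-key /etc/wireguard/private.key"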

config WIREGUARD_DEBUG
	bool "Debugging checks and verbose messages"
	depends on WIREGUARD
	help
	  This will write log messages for handshake and other events
	  that occur for a WireGuard interface. It will also perform some
	  extra validation checks and unit tests at various points. This is
	  only useful for debugging.

	  Say N here unless you know what you're doing.

config EQUALIZER
	tristate "EQL (serial line load balancing) support"
	help
	  If you have two serial connections to some other computer (this
	  usually requires two modems and two telephone lines) and you use
	  SLIP (the protocol for sending Internet traffic over telephone
	  lines) or PPP (a better SLIP) on them, you can make them behave like
	  one double speed connection using this driver. Naturally, this has
	  to be supported at the other end as well, either with a similar EQL
	  Linux driver or with a Livingston Portmaster 2e.

	  Say Y if you want this and read
	  <file:Documentation/networking/eql.rst>. You may also want to read
	  section 6.2 of the NET-3-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>.

	  To compile this driver as a module, choose M here: the module
	  will be called eql. If unsure, say N.

config NET_FC
	bool "Fibre Channel driver support"
	depends on SCSI && PCI
	help
	  Fibre Channel is a high speed serial protocol mainly used to connect
	  large storage devices to the computer; it is compatible with and
	  intended to replace SCSI.

	  If you intend to use Fibre Channel, you need to have a Fibre Channel
	  adaptor card in your computer; say Y here and to the driver for your
	  adaptor below. You also should have said Y to "SCSI support" and
	  "SCSI generic support".

config IFB
	tristate "Intermediate Functional Block support"
	depends on NET_ACT_MIRRED || NFT_FWD_NETDEV
	select NET_REDIRECT
	help
	  This is an intermediate driver that allows sharing of resources,
	  typically by redirecting ingress traffic to an ifb device where
	  egress queueing disciplines can then be applied.

	  To compile this driver as a module, choose M here: the module
	  will be called ifb. If you want to use more than one ifb
	  device at a time, you need to compile this driver as a module.
	  Instead of 'ifb', the devices will then be called 'ifb0',
	  'ifb1' etc.
	  Look at the iproute2 documentation directory for usage, etc.
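
	  For example, with a recent iproute2/tc (device names illustrative),
	  ingress traffic can be redirected to an ifb device for shaping:

	    "ip link set ifb0 up"
	    "tc qdisc add dev eth0 handle ffff: ingress"
	    "tc filter add dev eth0 parent ffff: matchall action mirred egress redirect dev ifb0"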

source "drivers/net/team/Kconfig"

config MACVLAN
	tristate "MAC-VLAN support"
	help
	  This allows one to create virtual interfaces that map packets to
	  or from specific MAC addresses to a particular interface.

	  Macvlan devices can be added using the "ip" command from the
	  iproute2 package starting with the iproute2-2.6.23 release:

	    "ip link add link <real dev> [ address MAC ] [ NAME ] type macvlan"

	  To compile this driver as a module, choose M here: the module
	  will be called macvlan.

config MACVTAP
	tristate "MAC-VLAN based tap driver"
	depends on MACVLAN
	depends on INET
	select TAP
	help
	  This adds a specialized tap character device driver that is based
	  on the MAC-VLAN network interface, called macvtap. A macvtap device
	  can be added in the same way as a macvlan device, using 'type
	  macvtap', and then be accessed through the tap user space interface.

	  To compile this driver as a module, choose M here: the module
	  will be called macvtap.
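
	  For example (device names illustrative):

	    "ip link add link eth0 name macvtap0 type macvtap"

	  The character device then typically appears as /dev/tapN, where N
	  is the interface index of the new macvtap0 device.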

config IPVLAN_L3S
	depends on NETFILTER
	depends on IPVLAN
	def_bool y
	select NET_L3_MASTER_DEV

config IPVLAN
	tristate "IP-VLAN support"
	depends on INET
	depends on IPV6 || !IPV6
	help
	  This allows one to create virtual devices off of a main interface;
	  packets are delivered based on their destination L3 (IPv4/IPv6)
	  address. All interfaces (including the main interface) share the
	  same L2 address, making it transparent to the connected L2 switch.

	  Ipvlan devices can be added using the "ip" command from the
	  iproute2 package starting with the iproute2-3.19 release:

	    "ip link add link <main-dev> [ NAME ] type ipvlan"

	  To compile this driver as a module, choose M here: the module
	  will be called ipvlan.
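
	  For example (names illustrative), an ipvlan device in L2 mode can
	  be created with:

	    "ip link add link eth0 ipvlan0 type ipvlan mode l2"

	  Supported modes are l2, l3, and (with IPVLAN_L3S) l3s.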

config IPVTAP
	tristate "IP-VLAN based tap driver"
	depends on IPVLAN
	depends on INET
	select TAP
	help
	  This adds a specialized tap character device driver that is based
	  on the IP-VLAN network interface, called ipvtap. An ipvtap device
	  can be added in the same way as an ipvlan device, using 'type
	  ipvtap', and then be accessed through the tap user space interface.

	  To compile this driver as a module, choose M here: the module
	  will be called ipvtap.

config VXLAN
	tristate "Virtual eXtensible Local Area Network (VXLAN)"
	depends on INET
	select NET_UDP_TUNNEL
	select GRO_CELLS
	help
	  This allows one to create vxlan virtual interfaces that provide
	  Layer 2 Networks over Layer 3 Networks. VXLAN is often used
	  to tunnel virtual network infrastructure in virtualized environments.
	  For more information see:
	    http://tools.ietf.org/html/draft-mahalingam-dutt-dcops-vxlan-02

	  To compile this driver as a module, choose M here: the module
	  will be called vxlan.
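
	  For example (VNI, addresses, and names illustrative), a VXLAN
	  device on the IANA-assigned port can be created with:

	    "ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789"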

config GENEVE
	tristate "Generic Network Virtualization Encapsulation"
	depends on INET
	depends on IPV6 || !IPV6
	select NET_UDP_TUNNEL
	select GRO_CELLS
	help
	  This allows one to create geneve virtual interfaces that provide
	  Layer 2 Networks over Layer 3 Networks. GENEVE is often used
	  to tunnel virtual network infrastructure in virtualized environments.
	  For more information see:
	    http://tools.ietf.org/html/draft-gross-geneve-02

	  To compile this driver as a module, choose M here: the module
	  will be called geneve.
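
	  For example (VNI, address, and name illustrative):

	    "ip link add gnv0 type geneve id 1000 remote 192.168.1.1"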

config BAREUDP
	tristate "Bare UDP Encapsulation"
	depends on INET
	depends on IPV6 || !IPV6
	select NET_UDP_TUNNEL
	select GRO_CELLS
	help
	  This adds a bare UDP tunnel module for tunnelling different
	  kinds of traffic like MPLS, IP, etc. inside a UDP tunnel.

	  To compile this driver as a module, choose M here: the module
	  will be called bareudp.
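
	  For example (port and name illustrative), a device tunnelling
	  unicast MPLS over UDP can be created with:

	    "ip link add dev bareudp0 type bareudp dstport 6635 ethertype mpls_uc"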

config GTP
	tristate "GPRS Tunneling Protocol datapath (GTP-U)"
	depends on INET
	select NET_UDP_TUNNEL
	help
	  This allows one to create gtp virtual interfaces that provide
	  the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
	  is used to prevent subscribers from accessing mobile carrier core
	  network infrastructure. This driver requires userspace software that
	  implements the signaling protocol (GTP-C) to update its PDP context
	  base, such as OpenGGSN <http://git.osmocom.org/openggsn/>. This
	  tunneling protocol is implemented according to the GSM TS 09.60 and
	  3GPP TS 29.060 standards.

	  To compile this driver as a module, choose M here: the module
	  will be called gtp.

config PFCP
	tristate "Packet Forwarding Control Protocol (PFCP)"
	depends on INET
	select NET_UDP_TUNNEL
	help
	  This allows one to create PFCP virtual interfaces that are used to
	  set up software and hardware offload of PFCP packets.
	  Note that this module does not support the PFCP protocol in kernel
	  space. There is no support for parsing any PFCP messages.

	  To compile this driver as a module, choose M here: the module
	  will be called pfcp.

config AMT
	tristate "Automatic Multicast Tunneling (AMT)"
	depends on INET && IP_MULTICAST
	depends on IPV6 || !IPV6
	select NET_UDP_TUNNEL
	help
	  This allows one to create AMT (Automatic Multicast Tunneling)
	  virtual interfaces that provide multicast tunneling.
	  There are two roles: Gateway and Relay.
	  The Gateway encapsulates IGMP/MLD traffic from listeners to the
	  Relay and decapsulates multicast traffic from the Relay to the
	  listeners. The Relay encapsulates multicast traffic from sources
	  to the Gateway and decapsulates IGMP/MLD traffic from the Gateway.

	  To compile this driver as a module, choose M here: the module
	  will be called amt.

config MACSEC
	tristate "IEEE 802.1AE MAC-level encryption (MACsec)"
	select CRYPTO
	select CRYPTO_AES
	select CRYPTO_GCM
	select GRO_CELLS
	help
	  MACsec is an encryption standard for Ethernet.
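
	  For example (names illustrative), a MACsec device with encryption
	  enabled can be created on top of a physical interface with:

	    "ip link add link eth0 macsec0 type macsec encrypt on"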

config NETCONSOLE
	tristate "Network console logging support"
	help
	  If you want to log kernel messages over the network, enable this.
	  See <file:Documentation/networking/netconsole.rst> for details.
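
	  For example, a logging target can be set on the kernel command
	  line (ports, addresses, and device names illustrative):

	    netconsole=4444@10.0.0.1/eth0,9353@10.0.0.2/12:34:56:78:9a:bc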

config NETCONSOLE_DYNAMIC
	bool "Dynamic reconfiguration of logging targets"
	depends on NETCONSOLE && SYSFS && CONFIGFS_FS && \
			!(NETCONSOLE=y && CONFIGFS_FS=m)
	help
	  This option enables the ability to dynamically reconfigure target
	  parameters (interface, IP addresses, port numbers, MAC addresses)
	  at runtime through a userspace interface exported using configfs.
	  See <file:Documentation/networking/netconsole.rst> for details.
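
	  For example, with configfs mounted (target name and address
	  illustrative), a new target can be created and enabled with:

	    mkdir /sys/kernel/config/netconsole/target1
	    echo 10.0.0.2 > /sys/kernel/config/netconsole/target1/remote_ip
	    echo 1 > /sys/kernel/config/netconsole/target1/enabled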

config NETCONSOLE_EXTENDED_LOG
	bool "Set kernel extended message by default"
	depends on NETCONSOLE
	default n
	help
	  Set extended log support for netconsole messages. If this option is
	  set, log messages are transmitted with an extended metadata header
	  in a format similar to /dev/kmsg. See
	  <file:Documentation/networking/netconsole.rst> for details.

config NETCONSOLE_PREPEND_RELEASE
	bool "Prepend kernel release version in the message by default"
	depends on NETCONSOLE_EXTENDED_LOG
	default n
	help
	  Set kernel release to be prepended to each netconsole message by
	  default. If this option is set, the kernel release is prepended into
	  the first field of every netconsole message, so the netconsole
	  server/peer can easily identify what kernel release is logging each
	  message. See <file:Documentation/networking/netconsole.rst> for
	  details.

config NETPOLL
	def_bool NETCONSOLE

config NET_POLL_CONTROLLER
	def_bool NETPOLL

config NTB_NETDEV
	tristate "Virtual Ethernet over NTB Transport"
	depends on NTB_TRANSPORT

config RIONET
	tristate "RapidIO Ethernet over messaging driver support"
	depends on RAPIDIO

config RIONET_TX_SIZE
	int "Number of outbound queue entries"
	depends on RIONET
	default "128"

config RIONET_RX_SIZE
	int "Number of inbound queue entries"
	depends on RIONET
	default "128"

config TUN
	tristate "Universal TUN/TAP device driver support"
	depends on INET
	select CRC32
	help
	  TUN/TAP provides packet reception and transmission for user space
	  programs. It can be viewed as a simple Point-to-Point or Ethernet
	  device, which, instead of receiving packets from physical media,
	  receives them from a user space program, and instead of sending
	  packets via physical media, writes them to the user space program.

	  When a program opens /dev/net/tun, the driver creates and registers
	  a corresponding net device, tunX or tapX. After the program closes
	  the device, the driver automatically deletes the tunX or tapX
	  device and all routes corresponding to it.

	  Please read <file:Documentation/networking/tuntap.rst> for more
	  information.

	  To compile this driver as a module, choose M here: the module
	  will be called tun.
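
	  For example (name illustrative), a persistent tun device can also
	  be created from the command line with:

	    "ip tuntap add dev tun0 mode tun"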

	  If you don't know what to use this for, you don't need it.

config TAP
	tristate
	help
	  This option is selected by any driver implementing the tap user
	  space interface for a virtual interface to re-use core tap
	  functionality.

config TUN_VNET_CROSS_LE
	bool "Support for cross-endian vnet headers on little-endian kernels"
	default n
	help
	  This option allows TUN/TAP and MACVTAP device drivers in a
	  little-endian kernel to parse vnet headers that come from a
	  big-endian legacy virtio device.

	  Userspace programs can control the feature using the TUNSETVNETBE
	  and TUNGETVNETBE ioctls.

	  Unless you have a little-endian system hosting a big-endian virtual
	  machine with a legacy virtio NIC, you should say N.

config VETH
	tristate "Virtual ethernet pair device"
	select PAGE_POOL
	help
	  This device is a local ethernet tunnel. Devices are created in
	  pairs. When one end receives a packet, it appears on its pair, and
	  vice versa.
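
	  For example (names illustrative), a pair can be created with:

	    "ip link add veth0 type veth peer name veth1"

	  One end is often moved into a container's network namespace while
	  the other remains in the host namespace.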

config VIRTIO_NET
	tristate "Virtio network driver"
	depends on VIRTIO
	select NET_FAILOVER
	select DIMLIB
	help
	  This is the virtual network driver for virtio. It can be used with
	  QEMU based VMMs (like KVM or Xen). Say Y or M.

config NLMON
	tristate "Virtual netlink monitoring device"
	help
	  This option enables a monitoring net device for netlink skbs. The
	  purpose of this is to analyze netlink messages with packet sockets.
	  Thus applications like tcpdump will be able to see local netlink
	  messages if they tap into the netlink device, record pcaps for
	  further diagnostics, etc. This is mostly intended for developers or
	  support to debug netlink issues. If unsure, say N.
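
	  For example (name illustrative), a monitoring device can be set up
	  with:

	    "ip link add nlmon0 type nlmon"
	    "ip link set nlmon0 up"
	    "tcpdump -i nlmon0 -w netlink.pcap"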

config NETKIT
	bool "BPF-programmable network device"
	depends on BPF_SYSCALL
	help
	  The netkit device is a virtual networking device where BPF programs
	  can be attached to the device's transmission routine in order to
	  implement the driver's internal logic. The device can be configured
	  to operate in L3 or L2 mode. If unsure, say N.
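
	  As a rough sketch, assuming an iproute2 release new enough to know
	  the netkit link type (names and exact flags here are an assumption
	  and may differ by version), a device pair might be created with:

	    "ip link add nk0 type netkit peer name nk1"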

config NET_VRF
	tristate "Virtual Routing and Forwarding (Lite)"
	depends on IP_MULTIPLE_TABLES
	depends on NET_L3_MASTER_DEV
	depends on IPV6 || IPV6=n
	depends on IPV6_MULTIPLE_TABLES || IPV6=n
	help
	  This option enables support for mapping interfaces into VRFs. The
	  support enables VRF devices.
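
	  For example (name and table number illustrative), a VRF device
	  bound to routing table 10 can be created and an interface enslaved
	  with:

	    "ip link add vrf-blue type vrf table 10"
	    "ip link set dev eth0 master vrf-blue"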

config VSOCKMON
	tristate "Virtual vsock monitoring device"
	depends on VHOST_VSOCK
	help
	  This option enables a monitoring net device for vsock sockets. It is
	  mostly intended for developers or support to debug vsock issues. If
	  unsure, say N.
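
	  Usage is assumed to mirror nlmon above; for example:

	    "ip link add type vsockmon"
	    "ip link set vsockmon0 up"
	    "tcpdump -i vsockmon0"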

config MHI_NET
	tristate "MHI network driver"
	depends on MHI_BUS
	help
	  This is the network driver for the MHI bus. It can be used with
	  QCOM based WWAN modems for IP or QMAP/rmnet protocol (like SDX55).
	  Say Y or M.

endif # NET_CORE

config SUNGEM_PHY
	tristate

source "drivers/net/arcnet/Kconfig"

source "drivers/atm/Kconfig"

source "drivers/net/caif/Kconfig"

source "drivers/net/dsa/Kconfig"

source "drivers/net/ethernet/Kconfig"

source "drivers/net/fddi/Kconfig"

source "drivers/net/hippi/Kconfig"

source "drivers/net/ipa/Kconfig"
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|

config NET_SB1000
	tristate "General Instruments Surfboard 1000"
	depends on ISA && PNP
	help
	  This is a driver for the General Instrument (also known as
	  NextLevel) SURFboard 1000 internal cable modem. This is an ISA card
	  which is used by a number of cable TV companies to provide cable
	  modem access. It's a one-way downstream-only cable modem, meaning
	  that your upstream net link is provided by your regular phone modem.

	  At present this driver only compiles as a module, so say M here if
	  you have this card. The module will be called sb1000. Then read
	  <file:Documentation/networking/device_drivers/cable/sb1000.rst> for
	  information on how to use this module, as it needs special ppp
	  scripts for establishing a connection. Further documentation
	  and the necessary scripts can be found at:

	  <http://www.jacksonville.net/~fventuri/>
	  <http://home.adelphia.net/~siglercm/sb1000.html>
	  <http://linuxpower.cx/~cable/>

	  If you don't have this card, of course say N.

source "drivers/net/phy/Kconfig"

source "drivers/net/pse-pd/Kconfig"

source "drivers/net/can/Kconfig"

source "drivers/net/mctp/Kconfig"

source "drivers/net/mdio/Kconfig"

source "drivers/net/pcs/Kconfig"

source "drivers/net/plip/Kconfig"

source "drivers/net/ppp/Kconfig"

source "drivers/net/slip/Kconfig"

source "drivers/s390/net/Kconfig"

source "drivers/net/usb/Kconfig"

source "drivers/net/wireless/Kconfig"

source "drivers/net/wan/Kconfig"

source "drivers/net/ieee802154/Kconfig"

net: Add a WWAN subsystem
This change introduces initial support for a WWAN framework. Given the
complexity and heterogeneity of existing WWAN hardware and interfaces,
there is no strict definition of what a WWAN device is and how it should
be represented. It's often a collection of multiple devices that together
perform the global WWAN feature (netdev, tty, chardev, etc).
One usual way to expose modem controls and configuration is via high
level protocols such as the well known AT command protocol, MBIM or
QMI. USB modems started to expose them as character devices, and
user daemons such as ModemManager learned to use them.
This initial version adds the concept of a WWAN port, which is a logical
pipe to a modem control protocol. The protocols are exposed raw to user
space via character devices, allowing straightforward support in existing
tools (ModemManager, ofono...). The WWAN core takes care of the generic
part, including character device management, and relies on port driver
operations to receive/submit protocol data.
Since the different devices exposing protocols for the same WWAN hardware
do not necessarily know about each other (e.g. two different USB
interfaces, PCI/MHI channel devices...) and can be created/removed in
different orders, the WWAN core ensures that all WWAN ports contributing
to the 'whole' WWAN feature are grouped under the same virtual WWAN
device, relying on the provided parent device (e.g. MHI controller,
USB device). It's a 'trick' I copied from Johannes's earlier WWAN
subsystem proposal.
This initial version is purposely minimalist; it's essentially moving
the generic part of the previously proposed mhi_wwan_ctrl driver inside
a common WWAN framework, but the implementation is open and flexible
enough to allow extension for further drivers.
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
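
As a sketch of how a port driver plugs into this core, the following shows
the ops layout as proposed in this initial version (start/stop/tx ops plus
wwan_create_port()); treat the exact signatures as illustrative, since the
API may have evolved since, and all my_* names are placeholders:

  // Hypothetical port driver glue for a modem control channel.
  #include <linux/skbuff.h>
  #include <linux/wwan.h>

  static int my_port_start(struct wwan_port *port)
  {
          /* Open the underlying control channel (USB/MHI/...). */
          return 0;
  }

  static void my_port_stop(struct wwan_port *port)
  {
          /* Close the underlying control channel. */
  }

  static int my_port_tx(struct wwan_port *port, struct sk_buff *skb)
  {
          /* Push protocol data (e.g. an AT command) to the modem.
           * Responses are fed back with wwan_port_rx(port, skb).
           */
          consume_skb(skb);
          return 0;
  }

  static const struct wwan_port_ops my_port_ops = {
          .start = my_port_start,
          .stop  = my_port_stop,
          .tx    = my_port_tx,
  };

  /* In the device probe path, grouping under the parent WWAN device:
   * port = wwan_create_port(parent_dev, WWAN_PORT_AT, &my_port_ops,
   *                         drvdata);
   */
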
source "drivers/net/wwan/Kconfig"

config XEN_NETDEV_FRONTEND
	tristate "Xen network device frontend driver"
	depends on XEN
	select XEN_XENBUS_FRONTEND
	select PAGE_POOL
	default y
	help
	  This driver provides support for Xen paravirtual network
	  devices exported by a Xen network driver domain (often
	  domain 0).

	  The corresponding Linux backend driver is enabled by the
	  CONFIG_XEN_NETDEV_BACKEND option.

	  If you are compiling a kernel for use as a Xen guest, you
	  should say Y here. To compile this driver as a module, choose
	  M here: the module will be called xen-netfront.

config XEN_NETDEV_BACKEND
	tristate "Xen backend network device"
	depends on XEN_BACKEND
	help
	  This driver allows the kernel to act as a Xen network driver
	  domain which exports paravirtual network devices to other
	  Xen domains. These devices can be accessed by any operating
	  system that implements a compatible front end.

	  The corresponding Linux frontend driver is enabled by the
	  CONFIG_XEN_NETDEV_FRONTEND configuration option.

	  The backend driver presents a standard network device
	  endpoint for each paravirtual network device to the driver
	  domain network stack. These can then be bridged or routed,
	  etc., in order to provide full network connectivity.

	  If you are compiling a kernel to run in a Xen network driver
	  domain (often this is domain 0) you should say Y here. To
	  compile this driver as a module, choose M here: the module
	  will be called xen-netback.

vmxnet3: Add XDP support.
The patch adds native-mode XDP support: XDP DROP, PASS, TX, and REDIRECT.
Background:
The vmxnet3 rx path consists of three rings: ring0, ring1, and dataring.
For r0 and r1, buffers at r0 are allocated using alloc_skb APIs and DMA
mapped to the ring's descriptors. If LRO is enabled and the packet size is
larger than 3K, VMXNET3_MAX_SKB_BUF_SIZE, then r1 is used to map the rest
of the buffer beyond VMXNET3_MAX_SKB_BUF_SIZE. Each buffer in r1 is
allocated using alloc_page. So for LRO packets, the payload will be in one
buffer from r0 and multiple from r1; for non-LRO packets, only one
descriptor in r0 is used for packet sizes less than 3K.
When receiving a packet, the first descriptor will have the sop (start of
packet) bit set, and the last descriptor will have the eop (end of packet)
bit set. Non-LRO packets will have only one descriptor with both sop and
eop set.
Other than r0 and r1, the vmxnet3 dataring is specifically designed for
handling small packets, usually 128 bytes, defined in
VMXNET3_DEF_RXDATA_DESC_SIZE, by simply copying the packet from the backend
driver in ESXi to the ring's memory region at the front-end vmxnet3 driver,
in order to avoid memory mapping/unmapping overhead. In summary, by packet
size:
A. < 128B: use dataring
B. 128B - 3K: use ring0 (VMXNET3_RX_BUF_SKB)
C. > 3K: use ring0 and ring1 (VMXNET3_RX_BUF_SKB + VMXNET3_RX_BUF_PAGE)
As a result, the patch adds XDP support for packets using the dataring
and r0 (cases A and B), not the large packet sizes used when LRO is enabled.
XDP Implementation:
When a user loads an XDP prog, the vmxnet3 driver checks the configuration,
such as mtu and lro, and re-allocates the rx buffers to reserve the extra
headroom, XDP_PACKET_HEADROOM, for the XDP frame. The XDP prog will then be
associated with every rx queue of the device. Note that when using the
dataring for small packet sizes, vmxnet3 (the front-end driver) doesn't
control the buffer allocation; as a result we allocate a new page and copy
the packet from the dataring to the XDP frame.
The receive side of XDP is implemented for cases A and B by invoking the
bpf program at vmxnet3_rq_rx_complete and handling its returned action.
The vmxnet3_process_xdp() and vmxnet3_process_xdp_small() functions handle
the ring0 and dataring cases separately, and decide the next journey of
the packet afterward.
For TX, vmxnet3 has a split-header design. Outgoing packets are parsed
first and protocol headers (L2/L3/L4) are copied to the backend. The
rest of the payload is DMA mapped. Since XDP_TX does not parse the
packet protocol, the entire XDP frame is DMA mapped for transmission
and transmitted in a batch. Later on, the frame is freed and recycled
back to the memory pool.
Performance:
Tested using two VMs inside one ESXi vSphere 7.0 machine, using a single
core on each vmxnet3 device, with the sender using DPDK testpmd tx-mode
attached to a vmxnet3 device, sending 64B or 512B UDP packets.
VM1 txgen:
$ dpdk-testpmd -l 0-3 -n 1 -- -i --nb-cores=3 \
--forward-mode=txonly --eth-peer=0,<mac addr of vm2>
option: add "--txonly-multi-flow"
option: use --txpkts=512 or 64 byte
VM2 running XDP:
$ ./samples/bpf/xdp_rxq_info -d ens160 -a <options> --skb-mode
$ ./samples/bpf/xdp_rxq_info -d ens160 -a <options>
options: XDP_DROP, XDP_PASS, XDP_TX
To test REDIRECT to cpu 0, use
$ ./samples/bpf/xdp_redirect_cpu -d ens160 -c 0 -e drop
Single core performance comparison with skb-mode.
64B: skb-mode -> native-mode
XDP_DROP: 1.6Mpps -> 2.4Mpps
XDP_PASS: 338Kpps -> 367Kpps
XDP_TX: 1.1Mpps -> 2.3Mpps
REDIRECT-drop: 1.3Mpps -> 2.3Mpps
512B: skb-mode -> native-mode
XDP_DROP: 863Kpps -> 1.3Mpps
XDP_PASS: 275Kpps -> 376Kpps
XDP_TX: 554Kpps -> 1.2Mpps
REDIRECT-drop: 659Kpps -> 1.2Mpps
Demo: https://youtu.be/4lm1CSCi78Q
Future work:
- XDP frag support
- use napi_consume_skb() instead of dev_kfree_skb_any at unmap
- stats using u64_stats_t
- using the bitfield macro BIT()
- optimization for DMA synchronization using the actual frame length,
instead of always max_len
Signed-off-by: William Tu <u9012063@gmail.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
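
For orientation, a native-XDP verdict loop in an rx-complete path usually
has roughly the shape sketched below. This is a generic schematic, not the
literal vmxnet3 code; struct my_rx_queue, the my_* helpers, and the MY_XDP_*
return codes are placeholders:

  // Generic native-XDP dispatch sketch (placeholder helpers, not the
  // actual vmxnet3 implementation).
  static int my_run_xdp(struct my_rx_queue *rq, struct xdp_buff *xdp)
  {
          struct bpf_prog *prog = READ_ONCE(rq->xdp_prog);
          u32 act = bpf_prog_run_xdp(prog, xdp);

          switch (act) {
          case XDP_PASS:
                  return MY_XDP_PASS;     /* build an skb, go up the stack */
          case XDP_TX:
                  if (my_xmit_xdp_frame(rq, xdp_convert_buff_to_frame(xdp)))
                          goto drop;      /* tx ring full: drop the frame */
                  return MY_XDP_CONSUMED;
          case XDP_REDIRECT:
                  if (xdp_do_redirect(rq->netdev, xdp, prog))
                          goto drop;
                  return MY_XDP_CONSUMED; /* flushed later via xdp_do_flush() */
          default:
                  bpf_warn_invalid_xdp_action(rq->netdev, prog, act);
                  fallthrough;
          case XDP_ABORTED:
                  trace_xdp_exception(rq->netdev, prog, act);
                  fallthrough;
          case XDP_DROP:
          drop:
                  my_recycle_rx_buffer(rq, xdp);
                  return MY_XDP_CONSUMED;
          }
  }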

config VMXNET3
	tristate "VMware VMXNET3 ethernet driver"
	depends on PCI && INET
	depends on PAGE_SIZE_LESS_THAN_64KB
	select PAGE_POOL
	help
	  This driver supports VMware's vmxnet3 virtual ethernet NIC.
	  To compile this driver as a module, choose M here: the
	  module will be called vmxnet3.

config FUJITSU_ES
	tristate "FUJITSU Extended Socket Network Device driver"
	depends on ACPI
	help
	  This driver provides support for the Extended Socket network device
	  on Extended Partitioning of the FUJITSU PRIMEQUEST 2000 E2 series.

source "drivers/net/thunderbolt/Kconfig"

source "drivers/net/hyperv/Kconfig"

config NETDEVSIM
	tristate "Simulated networking device"
	depends on DEBUG_FS
	depends on INET
	depends on IPV6 || IPV6=n
	depends on PSAMPLE || PSAMPLE=n
	depends on PTP_1588_CLOCK_MOCK || PTP_1588_CLOCK_MOCK=n
	select NET_DEVLINK
	select PAGE_POOL
	select NET_SHAPER
	help
	  This driver is a developer testing tool and software model that can
	  be used to test various control path networking APIs, especially
	  HW-offload related ones.

	  To compile this driver as a module, choose M here: the module
	  will be called netdevsim.
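
As a usage sketch: with netdevsim loaded, simulated devices are created
through its sysfs bus interface by writing "<id> <port count>" to
/sys/bus/netdevsim/new_device. A minimal C equivalent of
`echo "10 1" > /sys/bus/netdevsim/new_device` (the id 10 is arbitrary)
might look like:

  // Hypothetical helper: create a simulated netdev via netdevsim's
  // sysfs bus interface (equivalent to the echo shown above).
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char cmd[] = "10 1";   /* device id 10, 1 port */
          int fd = open("/sys/bus/netdevsim/new_device", O_WRONLY);

          if (fd < 0 || write(fd, cmd, strlen(cmd)) < 0) {
                  perror("netdevsim new_device");
                  return 1;
          }
          close(fd);
          return 0;
  }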

config NET_FAILOVER
	tristate "Failover driver"
	select FAILOVER
	help
	  This provides an automated failover mechanism via APIs to create
	  and destroy a failover master netdev and manages primary and
	  standby slave netdevs that get registered via the generic failover
	  infrastructure. This can be used by paravirtual drivers to enable
	  an alternate low latency datapath. It also enables live migration of
	  a VM with a direct attached VF by failing over to the paravirtual
	  datapath when the VF is unplugged.

config NETDEV_LEGACY_INIT
	bool
	depends on ISA
	help
	  Drivers that call netdev_boot_setup_check() should select this
	  symbol; everything else no longer needs it.

endif # NETDEVICES