Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.12-rc8).

Conflicts:

tools/testing/selftests/net/.gitignore
  252e01e682 ("selftests: net: add netlink-dumps to .gitignore")
  be43a6b238 ("selftests: ncdevmem: Move ncdevmem under drivers/net/hw")
https://lore.kernel.org/all/20241113122359.1b95180a@canb.auug.org.au/

drivers/net/phy/phylink.c
  671154f174 ("net: phylink: ensure PHY momentary link-fails are handled")
  7530ea26c8 ("net: phylink: remove "using_mac_select_pcs"")

Adjacent changes:

drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
  5b366eae71 ("stmmac: dwmac-intel-plat: fix call balance of tx_clk handling routines")
  e96321fad3 ("net: ethernet: Switch back to struct platform_driver::remove()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2024-11-14 11:27:36 -08:00
commit a79993b5fc
282 changed files with 2596 additions and 1276 deletions

@@ -665,6 +665,7 @@ Tomeu Vizoso <tomeu@tomeuvizoso.net> <tomeu.vizoso@collabora.com>
 Thomas Graf <tgraf@suug.ch>
 Thomas Körper <socketcan@esd.eu> <thomas.koerper@esd.eu>
 Thomas Pedersen <twp@codeaurora.org>
+Thorsten Blum <thorsten.blum@linux.dev> <thorsten.blum@toblux.com>
 Tiezhu Yang <yangtiezhu@loongson.cn> <kernelpatch@126.com>
 Tingwei Zhang <quic_tingwei@quicinc.com> <tingwei@codeaurora.org>
 Tirupathi Reddy <quic_tirupath@quicinc.com> <tirupath@codeaurora.org>

@@ -1599,6 +1599,15 @@ The following nested keys are defined.
 pglazyfreed (npn)
 Amount of reclaimed lazyfree pages
+swpin_zero
+Number of pages swapped into memory and filled with zero, where I/O
+was optimized out because the page content was detected to be zero
+during swapout.
+swpout_zero
+Number of zero-filled pages swapped out with I/O skipped due to the
+content being detected as zero.
 zswpin
 Number of pages moved in to memory from zswap.

@@ -6689,7 +6689,7 @@
 0: no polling (default)
 thp_anon= [KNL]
-Format: <size>,<size>[KMG]:<state>;<size>-<size>[KMG]:<state>
+Format: <size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>
 state is one of "always", "madvise", "never" or "inherit".
 Control the default behavior of the system with respect
 to anonymous transparent hugepages.
@@ -6728,6 +6728,15 @@
 torture.verbose_sleep_duration= [KNL]
 Duration of each verbose-printk() sleep in jiffies.
+tpm.disable_pcr_integrity= [HW,TPM]
+Do not protect PCR registers from unintended physical
+access, or interposers in the bus by the means of
+having an integrity protected session wrapped around
+TPM2_PCR_Extend command. Consider this in a situation
+where TPM is heavily utilized by IMA, thus protection
+causing a major performance hit, and the space where
+machines are deployed is by other means guarded.
 tpm_suspend_pcr=[HW,TPM]
 Format: integer pcr id
 Specify that at suspend time, the tpm driver

@@ -303,7 +303,7 @@ control by passing the parameter ``transparent_hugepage=always`` or
 kernel command line.
 Alternatively, each supported anonymous THP size can be controlled by
-passing ``thp_anon=<size>,<size>[KMG]:<state>;<size>-<size>[KMG]:<state>``,
+passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
 where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
 supported anonymous THP) and ``<state>`` is one of ``always``, ``madvise``,
 ``never`` or ``inherit``.
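
For illustration only (not part of the diff): a boot command line following the amended format, with an optional [KMG] suffix on each size, might look like::

    thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never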

@@ -225,6 +225,15 @@ The user must ensure the tokens are returned to the kernel in a timely manner.
 Failure to do so will exhaust the limited dmabuf that is bound to the RX queue
 and will lead to packet drops.
+The user must pass no more than 128 tokens, with no more than 1024 total frags
+among the token->token_count across all the tokens. If the user provides more
+than 1024 frags, the kernel will free up to 1024 frags and return early.
+The kernel returns the number of actual frags freed. The number of frags freed
+can be less than the tokens provided by the user in case of:
+(a) an internal kernel leak bug.
+(b) the user passed more than 1024 frags.
 Implementation & Caveats
 ========================
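
For context only (not part of the diff above): a minimal user-space sketch of returning tokens within the limits the added paragraph describes, assuming uapi headers new enough to provide ``struct dmabuf_token`` (<linux/uio.h>) and ``SO_DEVMEM_DONTNEED``; the helper name is ours, purely illustrative::

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/uio.h>          /* struct dmabuf_token (token_start, token_count) */

    #define DEVMEM_MAX_TOKENS 128   /* per-call limit described above */

    /* Return a batch of tokens received from the devmem RX path so the kernel
     * can recycle the dmabuf pages bound to the RX queue. */
    static int devmem_return_tokens(int sock, const struct dmabuf_token *tokens,
                                    unsigned int count)
    {
            if (count > DEVMEM_MAX_TOKENS)
                    count = DEVMEM_MAX_TOKENS;  /* caller submits the rest later */

            /* Per the text above, the return value is the number of frags
             * actually freed, which may be less than the sum of token_count
             * if the 1024-frag cap was hit. */
            int freed = setsockopt(sock, SOL_SOCKET, SO_DEVMEM_DONTNEED,
                                   tokens, count * sizeof(*tokens));
            if (freed < 0)
                    perror("SO_DEVMEM_DONTNEED");
            return freed;
    }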

@@ -11,18 +11,18 @@ Landlock LSM: kernel documentation
 Landlock's goal is to create scoped access-control (i.e. sandboxing). To
 harden a whole system, this feature should be available to any process,
-including unprivileged ones. Because such process may be compromised or
+including unprivileged ones. Because such a process may be compromised or
 backdoored (i.e. untrusted), Landlock's features must be safe to use from the
 kernel and other processes point of view. Landlock's interface must therefore
 expose a minimal attack surface.
 Landlock is designed to be usable by unprivileged processes while following the
 system security policy enforced by other access control mechanisms (e.g. DAC,
-LSM). Indeed, a Landlock rule shall not interfere with other access-controls
-enforced on the system, only add more restrictions.
+LSM). A Landlock rule shall not interfere with other access-controls enforced
+on the system, only add more restrictions.
 Any user can enforce Landlock rulesets on their processes. They are merged and
-evaluated according to the inherited ones in a way that ensures that only more
+evaluated against inherited rulesets in a way that ensures that only more
 constraints can be added.
 User space documentation can be found here:
@@ -43,7 +43,7 @@ Guiding principles for safe access controls
 only impact the processes requesting them.
 * Resources (e.g. file descriptors) directly obtained from the kernel by a
 sandboxed process shall retain their scoped accesses (at the time of resource
-acquisition) whatever process use them.
+acquisition) whatever process uses them.
 Cf. `File descriptor access rights`_.
 Design choices
@@ -71,7 +71,7 @@ the same results, when they are executed under the same Landlock domain.
 Taking the ``LANDLOCK_ACCESS_FS_TRUNCATE`` right as an example, it may be
 allowed to open a file for writing without being allowed to
 :manpage:`ftruncate` the resulting file descriptor if the related file
-hierarchy doesn't grant such access right. The following sequences of
+hierarchy doesn't grant that access right. The following sequences of
 operations have the same semantic and should then have the same result:
 * ``truncate(path);``
@@ -81,7 +81,7 @@ Similarly to file access modes (e.g. ``O_RDWR``), Landlock access rights
 attached to file descriptors are retained even if they are passed between
 processes (e.g. through a Unix domain socket). Such access rights will then be
 enforced even if the receiving process is not sandboxed by Landlock. Indeed,
-this is required to keep a consistent access control over the whole system, and
+this is required to keep access controls consistent over the whole system, and
 this avoids unattended bypasses through file descriptor passing (i.e. confused
 deputy attack).

@@ -8,13 +8,13 @@ Landlock: unprivileged access control
 =====================================
 :Author: Mickaël Salaün
-:Date: September 2024
+:Date: October 2024
-The goal of Landlock is to enable to restrict ambient rights (e.g. global
+The goal of Landlock is to enable restriction of ambient rights (e.g. global
 filesystem or network access) for a set of processes. Because Landlock
-is a stackable LSM, it makes possible to create safe security sandboxes as new
-security layers in addition to the existing system-wide access-controls. This
-kind of sandbox is expected to help mitigate the security impact of bugs or
+is a stackable LSM, it makes it possible to create safe security sandboxes as
+new security layers in addition to the existing system-wide access-controls.
+This kind of sandbox is expected to help mitigate the security impact of bugs or
 unexpected/malicious behaviors in user space applications. Landlock empowers
 any process, including unprivileged ones, to securely restrict themselves.
@@ -86,8 +86,8 @@ to be explicit about the denied-by-default access rights.
 LANDLOCK_SCOPE_SIGNAL,
 };
-Because we may not know on which kernel version an application will be
-executed, it is safer to follow a best-effort security approach. Indeed, we
+Because we may not know which kernel version an application will be executed
+on, it is safer to follow a best-effort security approach. Indeed, we
 should try to protect users as much as possible whatever the kernel they are
 using.
@@ -129,7 +129,7 @@ version, and only use the available subset of access rights:
 LANDLOCK_SCOPE_SIGNAL);
 }
-This enables to create an inclusive ruleset that will contain our rules.
+This enables the creation of an inclusive ruleset that will contain our rules.
 .. code-block:: c
@@ -219,42 +219,41 @@ If the ``landlock_restrict_self`` system call succeeds, the current thread is
 now restricted and this policy will be enforced on all its subsequently created
 children as well. Once a thread is landlocked, there is no way to remove its
 security policy; only adding more restrictions is allowed. These threads are
-now in a new Landlock domain, merge of their parent one (if any) with the new
-ruleset.
+now in a new Landlock domain, which is a merger of their parent one (if any)
+with the new ruleset.
 Full working code can be found in `samples/landlock/sandboxer.c`_.
 Good practices
 --------------
-It is recommended setting access rights to file hierarchy leaves as much as
+It is recommended to set access rights to file hierarchy leaves as much as
 possible. For instance, it is better to be able to have ``~/doc/`` as a
 read-only hierarchy and ``~/tmp/`` as a read-write hierarchy, compared to
 ``~/`` as a read-only hierarchy and ``~/tmp/`` as a read-write hierarchy.
 Following this good practice leads to self-sufficient hierarchies that do not
 depend on their location (i.e. parent directories). This is particularly
 relevant when we want to allow linking or renaming. Indeed, having consistent
-access rights per directory enables to change the location of such directory
+access rights per directory enables changing the location of such directories
 without relying on the destination directory access rights (except those that
 are required for this operation, see ``LANDLOCK_ACCESS_FS_REFER``
 documentation).
 Having self-sufficient hierarchies also helps to tighten the required access
 rights to the minimal set of data. This also helps avoid sinkhole directories,
 i.e. directories where data can be linked to but not linked from. However,
 this depends on data organization, which might not be controlled by developers.
 In this case, granting read-write access to ``~/tmp/``, instead of write-only
-access, would potentially allow to move ``~/tmp/`` to a non-readable directory
+access, would potentially allow moving ``~/tmp/`` to a non-readable directory
 and still keep the ability to list the content of ``~/tmp/``.
 Layers of file path access rights
 ---------------------------------
 Each time a thread enforces a ruleset on itself, it updates its Landlock domain
-with a new layer of policy. Indeed, this complementary policy is stacked with
-the potentially other rulesets already restricting this thread. A sandboxed
-thread can then safely add more constraints to itself with a new enforced
-ruleset.
+with a new layer of policy. This complementary policy is stacked with any
+other rulesets potentially already restricting this thread. A sandboxed thread
+can then safely add more constraints to itself with a new enforced ruleset.
 One policy layer grants access to a file path if at least one of its rules
 encountered on the path grants the access. A sandboxed thread can only access
@@ -265,7 +264,7 @@ etc.).
 Bind mounts and OverlayFS
 -------------------------
-Landlock enables to restrict access to file hierarchies, which means that these
+Landlock enables restricting access to file hierarchies, which means that these
 access rights can be propagated with bind mounts (cf.
 Documentation/filesystems/sharedsubtree.rst) but not with
 Documentation/filesystems/overlayfs.rst.
@@ -278,21 +277,21 @@ access to multiple file hierarchies at the same time, whether these hierarchies
 are the result of bind mounts or not.
 An OverlayFS mount point consists of upper and lower layers. These layers are
-combined in a merge directory, result of the mount point. This merge hierarchy
-may include files from the upper and lower layers, but modifications performed
-on the merge hierarchy only reflects on the upper layer. From a Landlock
-policy point of view, each OverlayFS layers and merge hierarchies are
-standalone and contains their own set of files and directories, which is
-different from bind mounts. A policy restricting an OverlayFS layer will not
-restrict the resulted merged hierarchy, and vice versa. Landlock users should
-then only think about file hierarchies they want to allow access to, regardless
-of the underlying filesystem.
+combined in a merge directory, and that merged directory becomes available at
+the mount point. This merge hierarchy may include files from the upper and
+lower layers, but modifications performed on the merge hierarchy only reflect
+on the upper layer. From a Landlock policy point of view, all OverlayFS layers
+and merge hierarchies are standalone and each contains their own set of files
+and directories, which is different from bind mounts. A policy restricting an
+OverlayFS layer will not restrict the resulted merged hierarchy, and vice versa.
+Landlock users should then only think about file hierarchies they want to allow
+access to, regardless of the underlying filesystem.
 Inheritance
 -----------
 Every new thread resulting from a :manpage:`clone(2)` inherits Landlock domain
-restrictions from its parent. This is similar to the seccomp inheritance (cf.
+restrictions from its parent. This is similar to seccomp inheritance (cf.
 Documentation/userspace-api/seccomp_filter.rst) or any other LSM dealing with
 task's :manpage:`credentials(7)`. For instance, one process's thread may apply
 Landlock rules to itself, but they will not be automatically applied to other
@@ -311,8 +310,8 @@ Ptrace restrictions
 A sandboxed process has less privileges than a non-sandboxed process and must
 then be subject to additional restrictions when manipulating another process.
 To be allowed to use :manpage:`ptrace(2)` and related syscalls on a target
-process, a sandboxed process should have a subset of the target process rules,
-which means the tracee must be in a sub-domain of the tracer.
+process, a sandboxed process should have a superset of the target process's
+access rights, which means the tracee must be in a sub-domain of the tracer.
 IPC scoping
 -----------
@@ -322,7 +321,7 @@ interactions between sandboxes. Each Landlock domain can be explicitly scoped
 for a set of actions by specifying it on a ruleset. For example, if a
 sandboxed process should not be able to :manpage:`connect(2)` to a
 non-sandboxed process through abstract :manpage:`unix(7)` sockets, we can
-specify such restriction with ``LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET``.
+specify such a restriction with ``LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET``.
 Moreover, if a sandboxed process should not be able to send a signal to a
 non-sandboxed process, we can specify this restriction with
 ``LANDLOCK_SCOPE_SIGNAL``.
@@ -394,7 +393,7 @@ Backward and forward compatibility
 Landlock is designed to be compatible with past and future versions of the
 kernel. This is achieved thanks to the system call attributes and the
 associated bitflags, particularly the ruleset's ``handled_access_fs``. Making
-handled access right explicit enables the kernel and user space to have a clear
+handled access rights explicit enables the kernel and user space to have a clear
 contract with each other. This is required to make sure sandboxing will not
 get stricter with a system update, which could break applications.
@@ -563,33 +562,34 @@ always allowed when using a kernel that only supports the first or second ABI.
 Starting with the Landlock ABI version 3, it is now possible to securely control
 truncation thanks to the new ``LANDLOCK_ACCESS_FS_TRUNCATE`` access right.
-Network support (ABI < 4)
--------------------------
+TCP bind and connect (ABI < 4)
+------------------------------
 Starting with the Landlock ABI version 4, it is now possible to restrict TCP
 bind and connect actions to only a set of allowed ports thanks to the new
 ``LANDLOCK_ACCESS_NET_BIND_TCP`` and ``LANDLOCK_ACCESS_NET_CONNECT_TCP``
 access rights.
-IOCTL (ABI < 5)
----------------
+Device IOCTL (ABI < 5)
+----------------------
 IOCTL operations could not be denied before the fifth Landlock ABI, so
 :manpage:`ioctl(2)` is always allowed when using a kernel that only supports an
 earlier ABI.
 Starting with the Landlock ABI version 5, it is possible to restrict the use of
-:manpage:`ioctl(2)` using the new ``LANDLOCK_ACCESS_FS_IOCTL_DEV`` right.
+:manpage:`ioctl(2)` on character and block devices using the new
+``LANDLOCK_ACCESS_FS_IOCTL_DEV`` right.
-Abstract UNIX socket scoping (ABI < 6)
---------------------------------------
+Abstract UNIX socket (ABI < 6)
+------------------------------
 Starting with the Landlock ABI version 6, it is possible to restrict
 connections to an abstract :manpage:`unix(7)` socket by setting
 ``LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET`` to the ``scoped`` ruleset attribute.
-Signal scoping (ABI < 6)
-------------------------
+Signal (ABI < 6)
+----------------
 Starting with the Landlock ABI version 6, it is possible to restrict
 :manpage:`signal(7)` sending by setting ``LANDLOCK_SCOPE_SIGNAL`` to the
@@ -605,9 +605,9 @@ Build time configuration
 Landlock was first introduced in Linux 5.13 but it must be configured at build
 time with ``CONFIG_SECURITY_LANDLOCK=y``. Landlock must also be enabled at boot
-time as the other security modules. The list of security modules enabled by
+time like other security modules. The list of security modules enabled by
 default is set with ``CONFIG_LSM``. The kernel configuration should then
-contains ``CONFIG_LSM=landlock,[...]`` with ``[...]`` as the list of other
+contain ``CONFIG_LSM=landlock,[...]`` with ``[...]`` as the list of other
 potentially useful security modules for the running system (see the
 ``CONFIG_LSM`` help).
@@ -669,7 +669,7 @@ Questions and answers
 What about user space sandbox managers?
 ---------------------------------------
-Using user space process to enforce restrictions on kernel resources can lead
+Using user space processes to enforce restrictions on kernel resources can lead
 to race conditions or inconsistent evaluations (i.e. `Incorrect mirroring of
 the OS code and state
 <https://www.ndss-symposium.org/ndss2003/traps-and-pitfalls-practical-problems-system-call-interposition-based-security-tools/>`_).
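
For illustration only (not part of the patch): a minimal best-effort sketch of the ABI 6 scoping documented above, assuming uapi and libc headers recent enough to provide ``struct landlock_ruleset_attr`` with its ``scoped`` field, the ``LANDLOCK_SCOPE_*`` flags, and the raw landlock syscall numbers invoked via ``syscall(2)``::

    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            /* Scope IPC: deny connecting to abstract unix sockets of, and
             * sending signals to, processes outside this Landlock domain. */
            struct landlock_ruleset_attr attr = {
                    .scoped = LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET |
                              LANDLOCK_SCOPE_SIGNAL,
            };
            long abi;
            int fd;

            /* Best effort: only enforce when the running kernel has ABI >= 6. */
            abi = syscall(SYS_landlock_create_ruleset, NULL, 0,
                          LANDLOCK_CREATE_RULESET_VERSION);
            if (abi < 6)
                    return 0;

            fd = syscall(SYS_landlock_create_ruleset, &attr, sizeof(attr), 0);
            if (fd < 0)
                    return 1;

            /* Required to enforce a ruleset without CAP_SYS_ADMIN. */
            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
                syscall(SYS_landlock_restrict_self, fd, 0))
                    return 1;

            close(fd);
            return 0;       /* now restricted; run sandboxed work here */
    }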

@@ -1174,8 +1174,9 @@ F: Documentation/hid/amd-sfh*
 F: drivers/hid/amd-sfh-hid/
 AMD SPI DRIVER
-M: Sanjay R Mehta <sanju.mehta@amd.com>
-S: Maintained
+M: Raju Rangoju <Raju.Rangoju@amd.com>
+L: linux-spi@vger.kernel.org
+S: Supported
 F: drivers/spi/spi-amd.c
 AMD XGBE DRIVER
@@ -19609,6 +19610,17 @@ S: Supported
 F: Documentation/devicetree/bindings/i2c/renesas,iic-emev2.yaml
 F: drivers/i2c/busses/i2c-emev2.c
+RENESAS ETHERNET AVB DRIVER
+M: Paul Barker <paul.barker.ct@bp.renesas.com>
+M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
+L: netdev@vger.kernel.org
+L: linux-renesas-soc@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/net/renesas,etheravb.yaml
+F: drivers/net/ethernet/renesas/Kconfig
+F: drivers/net/ethernet/renesas/Makefile
+F: drivers/net/ethernet/renesas/ravb*
 RENESAS ETHERNET SWITCH DRIVER
 R: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
 L: netdev@vger.kernel.org
@@ -19658,6 +19670,14 @@ F: Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
 F: drivers/i2c/busses/i2c-rcar.c
 F: drivers/i2c/busses/i2c-sh_mobile.c
+RENESAS R-CAR SATA DRIVER
+M: Geert Uytterhoeven <geert+renesas@glider.be>
+L: linux-ide@vger.kernel.org
+L: linux-renesas-soc@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
+F: drivers/ata/sata_rcar.c
 RENESAS R-CAR THERMAL DRIVERS
 M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
 L: linux-renesas-soc@vger.kernel.org
@@ -19733,6 +19753,17 @@ S: Supported
 F: Documentation/devicetree/bindings/i2c/renesas,rzv2m.yaml
 F: drivers/i2c/busses/i2c-rzv2m.c
+RENESAS SUPERH ETHERNET DRIVER
+M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
+L: netdev@vger.kernel.org
+L: linux-renesas-soc@vger.kernel.org
+S: Supported
+F: Documentation/devicetree/bindings/net/renesas,ether.yaml
+F: drivers/net/ethernet/renesas/Kconfig
+F: drivers/net/ethernet/renesas/Makefile
+F: drivers/net/ethernet/renesas/sh_eth*
+F: include/linux/sh_eth.h
 RENESAS USB PHY DRIVER
 M: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
 L: linux-renesas-soc@vger.kernel.org
@@ -21655,6 +21686,15 @@ S: Supported
 W: https://github.com/thesofproject/linux/
 F: sound/soc/sof/
+SOUND - GENERIC SOUND CARD (Simple-Audio-Card, Audio-Graph-Card)
+M: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
+S: Supported
+L: linux-sound@vger.kernel.org
+F: sound/soc/generic/
+F: include/sound/simple_card*
+F: Documentation/devicetree/bindings/sound/simple-card.yaml
+F: Documentation/devicetree/bindings/sound/audio-graph*.yaml
 SOUNDWIRE SUBSYSTEM
 M: Vinod Koul <vkoul@kernel.org>
 M: Bard Liao <yung-chuan.liao@linux.intel.com>

@@ -2,7 +2,7 @@
 VERSION = 6
 PATCHLEVEL = 12
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Baby Opossum Posse
 # *DOCUMENTATION*

@@ -2214,6 +2214,7 @@ config ARM64_SME
 bool "ARM Scalable Matrix Extension support"
 default y
 depends on ARM64_SVE
+depends on BROKEN
 help
 The Scalable Matrix Extension (SME) is an extension to the AArch64
 execution state which utilises a substantial subset of the SVE

@@ -6,6 +6,8 @@
 #ifndef BUILD_VDSO
 #include <linux/compiler.h>
+#include <linux/fs.h>
+#include <linux/shmem_fs.h>
 #include <linux/types.h>
 static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
@@ -31,19 +33,21 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
 }
 #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
-static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
+static inline unsigned long arch_calc_vm_flag_bits(struct file *file,
+unsigned long flags)
 {
 /*
 * Only allow MTE on anonymous mappings as these are guaranteed to be
 * backed by tags-capable memory. The vm_flags may be overridden by a
 * filesystem supporting MTE (RAM-based).
 */
-if (system_supports_mte() && (flags & MAP_ANONYMOUS))
+if (system_supports_mte() &&
+((flags & MAP_ANONYMOUS) || shmem_file(file)))
 return VM_MTE_ALLOWED;
 return 0;
 }
-#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
+#define arch_calc_vm_flag_bits(file, flags) arch_calc_vm_flag_bits(file, flags)
 static inline bool arch_validate_prot(unsigned long prot,
 unsigned long addr __always_unused)

@@ -26,10 +26,6 @@ void update_freq_counters_refs(void);
 #define arch_scale_freq_invariant topology_scale_freq_invariant
 #define arch_scale_freq_ref topology_get_freq_ref
-#ifdef CONFIG_ACPI_CPPC_LIB
-#define arch_init_invariance_cppc topology_init_cpu_capacity_cppc
-#endif
 /* Replace task scheduler's default cpu-invariant accounting */
 #define arch_scale_cpu_capacity topology_get_cpu_scale

@@ -1367,6 +1367,7 @@ static void sve_init_regs(void)
 } else {
 fpsimd_to_sve(current);
 current->thread.fp_type = FP_STATE_SVE;
+fpsimd_flush_task_state(current);
 }
 }

@@ -7,48 +7,19 @@
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
-#include <asm/thread_info.h>
-/*
-* If we have SMCCC v1.3 and (as is likely) no SVE state in
-* the registers then set the SMCCC hint bit to say there's no
-* need to preserve it. Do this by directly adjusting the SMCCC
-* function value which is already stored in x0 ready to be called.
-*/
-SYM_FUNC_START(__arm_smccc_sve_check)
-ldr_l x16, smccc_has_sve_hint
-cbz x16, 2f
-get_current_task x16
-ldr x16, [x16, #TSK_TI_FLAGS]
-tbnz x16, #TIF_FOREIGN_FPSTATE, 1f // Any live FP state?
-tbnz x16, #TIF_SVE, 2f // Does that state include SVE?
-1: orr x0, x0, ARM_SMCCC_1_3_SVE_HINT
-2: ret
-SYM_FUNC_END(__arm_smccc_sve_check)
-EXPORT_SYMBOL(__arm_smccc_sve_check)
 .macro SMCCC instr
-stp x29, x30, [sp, #-16]!
-mov x29, sp
-alternative_if ARM64_SVE
-bl __arm_smccc_sve_check
-alternative_else_nop_endif
 \instr #0
-ldr x4, [sp, #16]
+ldr x4, [sp]
 stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS]
 stp x2, x3, [x4, #ARM_SMCCC_RES_X2_OFFS]
-ldr x4, [sp, #24]
+ldr x4, [sp, #8]
 cbz x4, 1f /* no quirk structure */
 ldr x9, [x4, #ARM_SMCCC_QUIRK_ID_OFFS]
 cmp x9, #ARM_SMCCC_QUIRK_QCOM_A6
 b.ne 1f
 str x6, [x4, ARM_SMCCC_QUIRK_STATE_OFFS]
-1: ldp x29, x30, [sp], #16
-ret
+1: ret
 .endm
 /*

@@ -25,6 +25,7 @@
 /* 64-bit segment value. */
 #define XKPRANGE_UC_SEG (0x8000)
 #define XKPRANGE_CC_SEG (0x9000)
+#define XKPRANGE_WC_SEG (0xa000)
 #define XKVRANGE_VC_SEG (0xffff)
 /* Cached */
@@ -41,20 +42,28 @@
 #define XKPRANGE_UC_SHADOW_SIZE (XKPRANGE_UC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
 #define XKPRANGE_UC_SHADOW_END (XKPRANGE_UC_KASAN_OFFSET + XKPRANGE_UC_SHADOW_SIZE)
+/* WriteCombine */
+#define XKPRANGE_WC_START WRITECOMBINE_BASE
+#define XKPRANGE_WC_SIZE XRANGE_SIZE
+#define XKPRANGE_WC_KASAN_OFFSET XKPRANGE_UC_SHADOW_END
+#define XKPRANGE_WC_SHADOW_SIZE (XKPRANGE_WC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
+#define XKPRANGE_WC_SHADOW_END (XKPRANGE_WC_KASAN_OFFSET + XKPRANGE_WC_SHADOW_SIZE)
 /* VMALLOC (Cached or UnCached) */
 #define XKVRANGE_VC_START MODULES_VADDR
 #define XKVRANGE_VC_SIZE round_up(KFENCE_AREA_END - MODULES_VADDR + 1, PGDIR_SIZE)
-#define XKVRANGE_VC_KASAN_OFFSET XKPRANGE_UC_SHADOW_END
+#define XKVRANGE_VC_KASAN_OFFSET XKPRANGE_WC_SHADOW_END
 #define XKVRANGE_VC_SHADOW_SIZE (XKVRANGE_VC_SIZE >> KASAN_SHADOW_SCALE_SHIFT)
 #define XKVRANGE_VC_SHADOW_END (XKVRANGE_VC_KASAN_OFFSET + XKVRANGE_VC_SHADOW_SIZE)
 /* KAsan shadow memory start right after vmalloc. */
 #define KASAN_SHADOW_START round_up(KFENCE_AREA_END, PGDIR_SIZE)
 #define KASAN_SHADOW_SIZE (XKVRANGE_VC_SHADOW_END - XKPRANGE_CC_KASAN_OFFSET)
-#define KASAN_SHADOW_END round_up(KASAN_SHADOW_START + KASAN_SHADOW_SIZE, PGDIR_SIZE)
+#define KASAN_SHADOW_END (round_up(KASAN_SHADOW_START + KASAN_SHADOW_SIZE, PGDIR_SIZE) - 1)
 #define XKPRANGE_CC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_CC_KASAN_OFFSET)
 #define XKPRANGE_UC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_UC_KASAN_OFFSET)
+#define XKPRANGE_WC_SHADOW_OFFSET (KASAN_SHADOW_START + XKPRANGE_WC_KASAN_OFFSET)
 #define XKVRANGE_VC_SHADOW_OFFSET (KASAN_SHADOW_START + XKVRANGE_VC_KASAN_OFFSET)
 extern bool kasan_early_stage;

@@ -113,10 +113,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr);
 extern int __virt_addr_valid(volatile void *kaddr);
 #define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr))
-#define VM_DATA_DEFAULT_FLAGS \
-(VM_READ | VM_WRITE | \
-((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0) | \
-VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>

@@ -58,48 +58,48 @@ void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
 return ioremap_cache(phys, size);
 }
-static int cpu_enumerated = 0;
 #ifdef CONFIG_SMP
-static int set_processor_mask(u32 id, u32 flags)
+static int set_processor_mask(u32 id, u32 pass)
 {
-int nr_cpus;
-int cpu, cpuid = id;
-if (!cpu_enumerated)
-nr_cpus = NR_CPUS;
-else
-nr_cpus = nr_cpu_ids;
-if (num_processors >= nr_cpus) {
+int cpu = -1, cpuid = id;
+if (num_processors >= NR_CPUS) {
 pr_warn(PREFIX "nr_cpus limit of %i reached."
-" processor 0x%x ignored.\n", nr_cpus, cpuid);
+" processor 0x%x ignored.\n", NR_CPUS, cpuid);
 return -ENODEV;
 }
 if (cpuid == loongson_sysconf.boot_cpu_id)
 cpu = 0;
-else
-cpu = find_first_zero_bit(cpumask_bits(cpu_present_mask), NR_CPUS);
-if (!cpu_enumerated)
-set_cpu_possible(cpu, true);
-if (flags & ACPI_MADT_ENABLED) {
+switch (pass) {
+case 1: /* Pass 1 handle enabled processors */
+if (cpu < 0)
+cpu = find_first_zero_bit(cpumask_bits(cpu_present_mask), NR_CPUS);
 num_processors++;
 set_cpu_present(cpu, true);
-__cpu_number_map[cpuid] = cpu;
-__cpu_logical_map[cpu] = cpuid;
-} else
+break;
+case 2: /* Pass 2 handle disabled processors */
+if (cpu < 0)
+cpu = find_first_zero_bit(cpumask_bits(cpu_possible_mask), NR_CPUS);
 disabled_cpus++;
+break;
+default:
+return cpu;
+}
+set_cpu_possible(cpu, true);
+__cpu_number_map[cpuid] = cpu;
+__cpu_logical_map[cpu] = cpuid;
 return cpu;
 }
 #endif
 static int __init
-acpi_parse_processor(union acpi_subtable_headers *header, const unsigned long end)
+acpi_parse_p1_processor(union acpi_subtable_headers *header, const unsigned long end)
 {
 struct acpi_madt_core_pic *processor = NULL;
@@ -110,12 +110,29 @@ acpi_parse_processor(union acpi_subtable_headers *header, const unsigned long en
 acpi_table_print_madt_entry(&header->common);
 #ifdef CONFIG_SMP
 acpi_core_pic[processor->core_id] = *processor;
-set_processor_mask(processor->core_id, processor->flags);
+if (processor->flags & ACPI_MADT_ENABLED)
+set_processor_mask(processor->core_id, 1);
 #endif
 return 0;
 }
+static int __init
+acpi_parse_p2_processor(union acpi_subtable_headers *header, const unsigned long end)
+{
+struct acpi_madt_core_pic *processor = NULL;
+processor = (struct acpi_madt_core_pic *)header;
+if (BAD_MADT_ENTRY(processor, end))
+return -EINVAL;
+#ifdef CONFIG_SMP
+if (!(processor->flags & ACPI_MADT_ENABLED))
+set_processor_mask(processor->core_id, 2);
+#endif
+return 0;
+}
 static int __init
 acpi_parse_eio_master(union acpi_subtable_headers *header, const unsigned long end)
 {
@@ -143,12 +160,14 @@ static void __init acpi_process_madt(void)
 }
 #endif
 acpi_table_parse_madt(ACPI_MADT_TYPE_CORE_PIC,
-acpi_parse_processor, MAX_CORE_PIC);
+acpi_parse_p1_processor, MAX_CORE_PIC);
+acpi_table_parse_madt(ACPI_MADT_TYPE_CORE_PIC,
+acpi_parse_p2_processor, MAX_CORE_PIC);
 acpi_table_parse_madt(ACPI_MADT_TYPE_EIO_PIC,
 acpi_parse_eio_master, MAX_IO_PICS);
-cpu_enumerated = 1;
 loongson_sysconf.nr_cpus = num_processors;
 }
@@ -310,6 +329,10 @@ static int __ref acpi_map_cpu2node(acpi_handle handle, int cpu, int physid)
 int nid;
 nid = acpi_get_node(handle);
+if (nid != NUMA_NO_NODE)
+nid = early_cpu_to_node(cpu);
 if (nid != NUMA_NO_NODE) {
 set_cpuid_to_node(physid, nid);
 node_set(nid, numa_nodes_parsed);
@@ -324,12 +347,14 @@ int acpi_map_cpu(acpi_handle handle, phys_cpuid_t physid, u32 acpi_id, int *pcpu
 {
 int cpu;
-cpu = set_processor_mask(physid, ACPI_MADT_ENABLED);
-if (cpu < 0) {
+cpu = cpu_number_map(physid);
+if (cpu < 0 || cpu >= nr_cpu_ids) {
 pr_info(PREFIX "Unable to map lapic to logical cpu number\n");
-return cpu;
+return -ERANGE;
 }
+num_processors++;
+set_cpu_present(cpu, true);
 acpi_map_cpu2node(handle, cpu, physid);
 *pcpu = cpu;

@@ -51,11 +51,18 @@ static u64 paravt_steal_clock(int cpu)
 }
 #ifdef CONFIG_SMP
+static struct smp_ops native_ops;
 static void pv_send_ipi_single(int cpu, unsigned int action)
 {
 int min, old;
 irq_cpustat_t *info = &per_cpu(irq_stat, cpu);
+if (unlikely(action == ACTION_BOOT_CPU)) {
+native_ops.send_ipi_single(cpu, action);
+return;
+}
 old = atomic_fetch_or(BIT(action), &info->message);
 if (old)
 return;
@@ -75,6 +82,11 @@ static void pv_send_ipi_mask(const struct cpumask *mask, unsigned int action)
 if (cpumask_empty(mask))
 return;
+if (unlikely(action == ACTION_BOOT_CPU)) {
+native_ops.send_ipi_mask(mask, action);
+return;
+}
 action = BIT(action);
 for_each_cpu(i, mask) {
 info = &per_cpu(irq_stat, i);
@@ -147,6 +159,8 @@ static void pv_init_ipi(void)
 {
 int r, swi;
+/* Init native ipi irq for ACTION_BOOT_CPU */
+native_ops.init_ipi();
 swi = get_percpu_irq(INT_SWI0);
 if (swi < 0)
 panic("SWI0 IRQ mapping failed\n");
@@ -193,6 +207,7 @@ int __init pv_ipi_init(void)
 return 0;
 #ifdef CONFIG_SMP
+native_ops = mp_ops;
 mp_ops.init_ipi = pv_init_ipi;
 mp_ops.send_ipi_single = pv_send_ipi_single;
 mp_ops.send_ipi_mask = pv_send_ipi_mask;

@@ -302,7 +302,7 @@ static void __init fdt_smp_setup(void)
 __cpu_number_map[cpuid] = cpu;
 __cpu_logical_map[cpu] = cpuid;
-early_numa_add_cpu(cpu, 0);
+early_numa_add_cpu(cpuid, 0);
 set_cpuid_to_node(cpuid, 0);
 }
@@ -331,11 +331,11 @@ void __init loongson_prepare_cpus(unsigned int max_cpus)
 int i = 0;
 parse_acpi_topology();
+cpu_data[0].global_id = cpu_logical_map(0);
 for (i = 0; i < loongson_sysconf.nr_cpus; i++) {
 set_cpu_present(i, true);
 csr_mail_send(0, __cpu_logical_map[i], 0);
-cpu_data[i].global_id = __cpu_logical_map[i];
 }
 per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
@@ -380,6 +380,7 @@ void loongson_init_secondary(void)
 cpu_logical_map(cpu) / loongson_sysconf.cores_per_package;
 cpu_data[cpu].core = pptt_enabled ? cpu_data[cpu].core :
 cpu_logical_map(cpu) % loongson_sysconf.cores_per_package;
+cpu_data[cpu].global_id = cpu_logical_map(cpu);
 }
 void loongson_smp_finish(void)

@@ -13,6 +13,13 @@
 static pgd_t kasan_pg_dir[PTRS_PER_PGD] __initdata __aligned(PAGE_SIZE);
+#ifdef __PAGETABLE_P4D_FOLDED
+#define __pgd_none(early, pgd) (0)
+#else
+#define __pgd_none(early, pgd) (early ? (pgd_val(pgd) == 0) : \
+(__pa(pgd_val(pgd)) == (unsigned long)__pa(kasan_early_shadow_p4d)))
+#endif
 #ifdef __PAGETABLE_PUD_FOLDED
 #define __p4d_none(early, p4d) (0)
 #else
@@ -55,6 +62,9 @@ void *kasan_mem_to_shadow(const void *addr)
 case XKPRANGE_UC_SEG:
 offset = XKPRANGE_UC_SHADOW_OFFSET;
 break;
+case XKPRANGE_WC_SEG:
+offset = XKPRANGE_WC_SHADOW_OFFSET;
+break;
 case XKVRANGE_VC_SEG:
 offset = XKVRANGE_VC_SHADOW_OFFSET;
 break;
@@ -79,6 +89,8 @@ const void *kasan_shadow_to_mem(const void *shadow_addr)
 if (addr >= XKVRANGE_VC_SHADOW_OFFSET)
 return (void *)(((addr - XKVRANGE_VC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKVRANGE_VC_START);
+else if (addr >= XKPRANGE_WC_SHADOW_OFFSET)
+return (void *)(((addr - XKPRANGE_WC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPRANGE_WC_START);
 else if (addr >= XKPRANGE_UC_SHADOW_OFFSET)
 return (void *)(((addr - XKPRANGE_UC_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT) + XKPRANGE_UC_START);
 else if (addr >= XKPRANGE_CC_SHADOW_OFFSET)
@@ -142,6 +154,19 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
 return pud_offset(p4dp, addr);
 }
+static p4d_t *__init kasan_p4d_offset(pgd_t *pgdp, unsigned long addr, int node, bool early)
+{
+if (__pgd_none(early, pgdp_get(pgdp))) {
+phys_addr_t p4d_phys = early ?
+__pa_symbol(kasan_early_shadow_p4d) : kasan_alloc_zeroed_page(node);
+if (!early)
+memcpy(__va(p4d_phys), kasan_early_shadow_p4d, sizeof(kasan_early_shadow_p4d));
+pgd_populate(&init_mm, pgdp, (p4d_t *)__va(p4d_phys));
+}
+return p4d_offset(pgdp, addr);
+}
 static void __init kasan_pte_populate(pmd_t *pmdp, unsigned long addr,
 unsigned long end, int node, bool early)
 {
@@ -178,19 +203,19 @@ static void __init kasan_pud_populate(p4d_t *p4dp, unsigned long addr,
 do {
 next = pud_addr_end(addr, end);
 kasan_pmd_populate(pudp, addr, next, node, early);
-} while (pudp++, addr = next, addr != end);
+} while (pudp++, addr = next, addr != end && __pud_none(early, READ_ONCE(*pudp)));
 }
 static void __init kasan_p4d_populate(pgd_t *pgdp, unsigned long addr,
 unsigned long end, int node, bool early)
 {
 unsigned long next;
-p4d_t *p4dp = p4d_offset(pgdp, addr);
+p4d_t *p4dp = kasan_p4d_offset(pgdp, addr, node, early);
 do {
 next = p4d_addr_end(addr, end);
 kasan_pud_populate(p4dp, addr, next, node, early);
-} while (p4dp++, addr = next, addr != end);
+} while (p4dp++, addr = next, addr != end && __p4d_none(early, READ_ONCE(*p4dp)));
 }
 static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
@@ -218,7 +243,7 @@ static void __init kasan_map_populate(unsigned long start, unsigned long end,
 asmlinkage void __init kasan_early_init(void)
 {
 BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
-BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
+BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END + 1, PGDIR_SIZE));
 }
 static inline void kasan_set_pgd(pgd_t *pgdp, pgd_t pgdval)
@@ -233,7 +258,7 @@ static void __init clear_pgds(unsigned long start, unsigned long end)
 * swapper_pg_dir. pgd_clear() can't be used
 * here because it's nop on 2,3-level pagetable setups
 */
-for (; start < end; start += PGDIR_SIZE)
+for (; start < end; start = pgd_addr_end(start, end))
 kasan_set_pgd((pgd_t *)pgd_offset_k(start), __pgd(0));
 }
@@ -242,6 +267,17 @@ void __init kasan_init(void)
 u64 i;
 phys_addr_t pa_start, pa_end;
+/*
+* If PGDIR_SIZE is too large for cpu_vabits, KASAN_SHADOW_END will
+* overflow UINTPTR_MAX and then looks like a user space address.
+* For example, PGDIR_SIZE of CONFIG_4KB_4LEVEL is 2^39, which is too
+* large for Loongson-2K series whose cpu_vabits = 39.
+*/
+if (KASAN_SHADOW_END < vm_map_base) {
+pr_warn("PGDIR_SIZE too large for cpu_vabits, KernelAddressSanitizer disabled.\n");
+return;
+}
 /*
 * PGD was populated as invalid_pmd_table or invalid_pud_table
 * in pagetable_init() which depends on how many levels of page

@@ -2,6 +2,7 @@
 #ifndef __ASM_MMAN_H__
 #define __ASM_MMAN_H__
+#include <linux/fs.h>
 #include <uapi/asm/mman.h>
 /* PARISC cannot allow mdwe as it needs writable stacks */
@@ -11,7 +12,7 @@ static inline bool arch_memory_deny_write_exec_supported(void)
 }
 #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
-static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
+static inline unsigned long arch_calc_vm_flag_bits(struct file *file, unsigned long flags)
 {
 /*
 * The stack on parisc grows upwards, so if userspace requests memory
@@ -23,6 +24,6 @@ static inline unsigned long arch_calc_vm_flag_bits(unsigned long flags)
 return 0;
 }
-#define arch_calc_vm_flag_bits(flags) arch_calc_vm_flag_bits(flags)
+#define arch_calc_vm_flag_bits(file, flags) arch_calc_vm_flag_bits(file, flags)
 #endif /* __ASM_MMAN_H__ */

@@ -4898,6 +4898,18 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
 BOOK3S_INTERRUPT_EXTERNAL, 0);
 else
 lpcr |= LPCR_MER;
+} else {
+/*
+* L1's copy of L2's LPCR (vcpu->arch.vcore->lpcr) can get its MER bit
+* unexpectedly set - for e.g. during NMI handling when all register
+* states are synchronized from L0 to L1. L1 needs to inform L0 about
+* MER=1 only when there are pending external interrupts.
+* In the above if check, MER bit is set if there are pending
+* external interrupts. Hence, explicity mask off MER bit
+* here as otherwise it may generate spurious interrupts in L2 KVM
+* causing an endless loop, which results in L2 guest getting hung.
+*/
+lpcr &= ~LPCR_MER;
 }
 } else if (vcpu->arch.pending_exceptions ||
 vcpu->arch.doorbell_request ||

@@ -305,9 +305,4 @@ static inline void freq_invariance_set_perf_ratio(u64 ratio, bool turbo_disabled
 extern void arch_scale_freq_tick(void);
 #define arch_scale_freq_tick arch_scale_freq_tick
-#ifdef CONFIG_ACPI_CPPC_LIB
-void init_freq_invariance_cppc(void);
-#define arch_init_invariance_cppc init_freq_invariance_cppc
-#endif
 #endif /* _ASM_X86_TOPOLOGY_H */

@@ -110,7 +110,7 @@ static void amd_set_max_freq_ratio(void)
 static DEFINE_MUTEX(freq_invariance_lock);
-void init_freq_invariance_cppc(void)
+static inline void init_freq_invariance_cppc(void)
 {
 static bool init_done;
@@ -127,6 +127,11 @@ void init_freq_invariance_cppc(void)
 mutex_unlock(&freq_invariance_lock);
 }
+void acpi_processor_init_invariance_cppc(void)
+{
+init_freq_invariance_cppc();
+}
 /*
 * Get the highest performance register value.
 * @cpu: CPU from which to get highest performance.

@@ -2629,19 +2629,26 @@ void kvm_apic_update_apicv(struct kvm_vcpu *vcpu)
 {
 struct kvm_lapic *apic = vcpu->arch.apic;
-if (apic->apicv_active) {
-/* irr_pending is always true when apicv is activated. */
-apic->irr_pending = true;
+/*
+* When APICv is enabled, KVM must always search the IRR for a pending
+* IRQ, as other vCPUs and devices can set IRR bits even if the vCPU
+* isn't running. If APICv is disabled, KVM _should_ search the IRR
+* for a pending IRQ. But KVM currently doesn't ensure *all* hardware,
+* e.g. CPUs and IOMMUs, has seen the change in state, i.e. searching
+* the IRR at this time could race with IRQ delivery from hardware that
+* still sees APICv as being enabled.
+*
+* FIXME: Ensure other vCPUs and devices observe the change in APICv
+* state prior to updating KVM's metadata caches, so that KVM
+* can safely search the IRR and set irr_pending accordingly.
+*/
+apic->irr_pending = true;
+if (apic->apicv_active)
 apic->isr_count = 1;
-} else {
-/*
-* Don't clear irr_pending, searching the IRR can race with
-* updates from the CPU as APICv is still active from hardware's
-* perspective. The flag will be cleared as appropriate when
-* KVM injects the interrupt.
-*/
+else
 apic->isr_count = count_vectors(apic->regs + APIC_ISR);
-}
 apic->highest_isr_cache = -1;
 }

View File

@ -450,8 +450,11 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
goto e_free; goto e_free;
/* This needs to happen after SEV/SNP firmware initialization. */ /* This needs to happen after SEV/SNP firmware initialization. */
if (vm_type == KVM_X86_SNP_VM && snp_guest_req_init(kvm)) if (vm_type == KVM_X86_SNP_VM) {
goto e_free; ret = snp_guest_req_init(kvm);
if (ret)
goto e_free;
}
INIT_LIST_HEAD(&sev->regions_list); INIT_LIST_HEAD(&sev->regions_list);
INIT_LIST_HEAD(&sev->mirror_vms); INIT_LIST_HEAD(&sev->mirror_vms);
@ -2212,10 +2215,6 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (sev->snp_context) if (sev->snp_context)
return -EINVAL; return -EINVAL;
sev->snp_context = snp_context_create(kvm, argp);
if (!sev->snp_context)
return -ENOTTY;
if (params.flags) if (params.flags)
return -EINVAL; return -EINVAL;
@ -2230,6 +2229,10 @@ static int snp_launch_start(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (params.policy & SNP_POLICY_MASK_SINGLE_SOCKET) if (params.policy & SNP_POLICY_MASK_SINGLE_SOCKET)
return -EINVAL; return -EINVAL;
sev->snp_context = snp_context_create(kvm, argp);
if (!sev->snp_context)
return -ENOTTY;
start.gctx_paddr = __psp_pa(sev->snp_context); start.gctx_paddr = __psp_pa(sev->snp_context);
start.policy = params.policy; start.policy = params.policy;
memcpy(start.gosvw, params.gosvw, sizeof(params.gosvw)); memcpy(start.gosvw, params.gosvw, sizeof(params.gosvw));

View File

@ -1197,11 +1197,14 @@ static void nested_vmx_transition_tlb_flush(struct kvm_vcpu *vcpu,
kvm_hv_nested_transtion_tlb_flush(vcpu, enable_ept); kvm_hv_nested_transtion_tlb_flush(vcpu, enable_ept);
/* /*
* If vmcs12 doesn't use VPID, L1 expects linear and combined mappings * If VPID is disabled, then guest TLB accesses use VPID=0, i.e. the
* for *all* contexts to be flushed on VM-Enter/VM-Exit, i.e. it's a * same VPID as the host, and so architecturally, linear and combined
* full TLB flush from the guest's perspective. This is required even * mappings for VPID=0 must be flushed at VM-Enter and VM-Exit. KVM
* if VPID is disabled in the host as KVM may need to synchronize the * emulates L2 sharing L1's VPID=0 by using vpid01 while running L2,
* MMU in response to the guest TLB flush. * and so KVM must also emulate TLB flush of VPID=0, i.e. vpid01. This
* is required if VPID is disabled in KVM, as a TLB flush (there are no
* VPIDs) still occurs from L1's perspective, and KVM may need to
* synchronize the MMU in response to the guest TLB flush.
* *
* Note, using TLB_FLUSH_GUEST is correct even if nested EPT is in use. * Note, using TLB_FLUSH_GUEST is correct even if nested EPT is in use.
* EPT is a special snowflake, as guest-physical mappings aren't * EPT is a special snowflake, as guest-physical mappings aren't
@ -2315,6 +2318,17 @@ static void prepare_vmcs02_early_rare(struct vcpu_vmx *vmx,
vmcs_write64(VMCS_LINK_POINTER, INVALID_GPA); vmcs_write64(VMCS_LINK_POINTER, INVALID_GPA);
/*
* If VPID is disabled, then guest TLB accesses use VPID=0, i.e. the
* same VPID as the host. Emulate this behavior by using vpid01 for L2
* if VPID is disabled in vmcs12. Note, if VPID is disabled, VM-Enter
* and VM-Exit are architecturally required to flush VPID=0, but *only*
* VPID=0. I.e. using vpid02 would be ok (so long as KVM emulates the
* required flushes), but doing so would cause KVM to over-flush. E.g.
* if L1 runs L2 X with VPID12=1, then runs L2 Y with VPID12 disabled,
* and then runs L2 X again, then KVM can and should retain TLB entries
* for VPID12=1.
*/
if (enable_vpid) { if (enable_vpid) {
if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02) if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02); vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
@ -5950,6 +5964,12 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
return nested_vmx_fail(vcpu, return nested_vmx_fail(vcpu,
VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID); VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
/*
* Always flush the effective vpid02, i.e. never flush the current VPID
* and never explicitly flush vpid01. INVVPID targets a VPID, not a
* VMCS, and so whether or not the current vmcs12 has VPID enabled is
* irrelevant (and there may not be a loaded vmcs12).
*/
vpid02 = nested_get_vpid02(vcpu); vpid02 = nested_get_vpid02(vcpu);
switch (type) { switch (type) {
case VMX_VPID_EXTENT_INDIVIDUAL_ADDR: case VMX_VPID_EXTENT_INDIVIDUAL_ADDR:

View File

@ -217,9 +217,11 @@ module_param(ple_window_shrink, uint, 0444);
static unsigned int ple_window_max = KVM_VMX_DEFAULT_PLE_WINDOW_MAX; static unsigned int ple_window_max = KVM_VMX_DEFAULT_PLE_WINDOW_MAX;
module_param(ple_window_max, uint, 0444); module_param(ple_window_max, uint, 0444);
/* Default is SYSTEM mode, 1 for host-guest mode */ /* Default is SYSTEM mode, 1 for host-guest mode (which is BROKEN) */
int __read_mostly pt_mode = PT_MODE_SYSTEM; int __read_mostly pt_mode = PT_MODE_SYSTEM;
#ifdef CONFIG_BROKEN
module_param(pt_mode, int, S_IRUGO); module_param(pt_mode, int, S_IRUGO);
#endif
struct x86_pmu_lbr __ro_after_init vmx_lbr_caps; struct x86_pmu_lbr __ro_after_init vmx_lbr_caps;
@ -3216,7 +3218,7 @@ void vmx_flush_tlb_all(struct kvm_vcpu *vcpu)
static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu) static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
{ {
if (is_guest_mode(vcpu)) if (is_guest_mode(vcpu) && nested_cpu_has_vpid(get_vmcs12(vcpu)))
return nested_get_vpid02(vcpu); return nested_get_vpid02(vcpu);
return to_vmx(vcpu)->vpid; return to_vmx(vcpu)->vpid;
} }

View File

@ -671,10 +671,6 @@ static int pcc_data_alloc(int pcc_ss_id)
* ) * )
*/ */
#ifndef arch_init_invariance_cppc
static inline void arch_init_invariance_cppc(void) { }
#endif
/** /**
* acpi_cppc_processor_probe - Search for per CPU _CPC objects. * acpi_cppc_processor_probe - Search for per CPU _CPC objects.
* @pr: Ptr to acpi_processor containing this CPU's logical ID. * @pr: Ptr to acpi_processor containing this CPU's logical ID.
@ -905,8 +901,6 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
goto out_free; goto out_free;
} }
arch_init_invariance_cppc();
kfree(output.pointer); kfree(output.pointer);
return 0; return 0;

View File

@ -237,6 +237,9 @@ static struct notifier_block acpi_processor_notifier_block = {
.notifier_call = acpi_processor_notifier, .notifier_call = acpi_processor_notifier,
}; };
void __weak acpi_processor_init_invariance_cppc(void)
{ }
/* /*
* We keep the driver loaded even when ACPI is not running. * We keep the driver loaded even when ACPI is not running.
* This is needed for the powernow-k8 driver, that works even without * This is needed for the powernow-k8 driver, that works even without
@ -270,6 +273,12 @@ static int __init acpi_processor_driver_init(void)
NULL, acpi_soft_cpu_dead); NULL, acpi_soft_cpu_dead);
acpi_processor_throttling_init(); acpi_processor_throttling_init();
/*
* Frequency invariance calculations on AMD platforms can't be run until
* after acpi_cppc_processor_probe() has been called for all online CPUs
*/
acpi_processor_init_invariance_cppc();
return 0; return 0;
err: err:
driver_unregister(&acpi_processor_driver); driver_unregister(&acpi_processor_driver);
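The ordering constraint in the comment above is implemented with a __weak default that an architecture can override with a strong definition (the x86 and arch_topology hunks elsewhere in this merge provide those overrides). A minimal standalone sketch of the weak/strong override pattern, with a placeholder body:

#include <stdio.h>

/* Weak default: nothing to do on architectures without CPPC-based
 * frequency invariance. A strong definition elsewhere replaces it. */
void __attribute__((weak)) acpi_processor_init_invariance_cppc(void)
{
	puts("no CPPC frequency-invariance support");
}

int main(void)
{
	/* called once, after every online CPU has been probed */
	acpi_processor_init_invariance_cppc();
	return 0;
}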

View File

@ -366,7 +366,7 @@ void __weak freq_inv_set_max_ratio(int cpu, u64 max_rate)
#ifdef CONFIG_ACPI_CPPC_LIB #ifdef CONFIG_ACPI_CPPC_LIB
#include <acpi/cppc_acpi.h> #include <acpi/cppc_acpi.h>
void topology_init_cpu_capacity_cppc(void) static inline void topology_init_cpu_capacity_cppc(void)
{ {
u64 capacity, capacity_scale = 0; u64 capacity, capacity_scale = 0;
struct cppc_perf_caps perf_caps; struct cppc_perf_caps perf_caps;
@ -417,6 +417,10 @@ void topology_init_cpu_capacity_cppc(void)
exit: exit:
free_raw_capacity(); free_raw_capacity();
} }
void acpi_processor_init_invariance_cppc(void)
{
topology_init_cpu_capacity_cppc();
}
#endif #endif
#ifdef CONFIG_CPU_FREQ #ifdef CONFIG_CPU_FREQ

View File

@ -3288,13 +3288,12 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
case INTEL_TLV_TEST_EXCEPTION: case INTEL_TLV_TEST_EXCEPTION:
/* Generate devcoredump from exception */ /* Generate devcoredump from exception */
if (!hci_devcd_init(hdev, skb->len)) { if (!hci_devcd_init(hdev, skb->len)) {
hci_devcd_append(hdev, skb); hci_devcd_append(hdev, skb_clone(skb, GFP_ATOMIC));
hci_devcd_complete(hdev); hci_devcd_complete(hdev);
} else { } else {
bt_dev_err(hdev, "Failed to generate devcoredump"); bt_dev_err(hdev, "Failed to generate devcoredump");
kfree_skb(skb);
} }
return 0; break;
default: default:
bt_dev_err(hdev, "Invalid exception type %02X", tlv->val[0]); bt_dev_err(hdev, "Invalid exception type %02X", tlv->val[0]);
} }

View File

@ -146,6 +146,26 @@ void tpm_buf_append_u32(struct tpm_buf *buf, const u32 value)
} }
EXPORT_SYMBOL_GPL(tpm_buf_append_u32); EXPORT_SYMBOL_GPL(tpm_buf_append_u32);
/**
* tpm_buf_append_handle() - Add a handle
* @chip: &tpm_chip instance
* @buf: &tpm_buf instance
* @handle: a TPM object handle
*
* Add a handle to the buffer, and increase the count tracking the number of
* handles in the command buffer. Works only for command buffers.
*/
void tpm_buf_append_handle(struct tpm_chip *chip, struct tpm_buf *buf, u32 handle)
{
if (buf->flags & TPM_BUF_TPM2B) {
dev_err(&chip->dev, "Invalid buffer type (TPM2B)\n");
return;
}
tpm_buf_append_u32(buf, handle);
buf->handles++;
}
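A hypothetical helper showing how the new function pairs with tpm_buf_append_auth() from the sessions change later in this merge; the chip pointer and PCR index come from the caller, and the declarations are assumed to come from the driver-internal tpm.h:

/* sketch only: assumes the driver-internal "tpm.h" declarations are in scope */
static int example_start_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
				    struct tpm_buf *buf)
{
	int rc;

	rc = tpm_buf_init(buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_EXTEND);
	if (rc)
		return rc;

	/* no HMAC session: append the raw handle plus a plain password area */
	tpm_buf_append_handle(chip, buf, pcr_idx);
	tpm_buf_append_auth(chip, buf, 0, NULL, 0);

	return 0;
}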
/** /**
* tpm_buf_read() - Read from a TPM buffer * tpm_buf_read() - Read from a TPM buffer
* @buf: &tpm_buf instance * @buf: &tpm_buf instance

View File

@ -14,6 +14,10 @@
#include "tpm.h" #include "tpm.h"
#include <crypto/hash_info.h> #include <crypto/hash_info.h>
static bool disable_pcr_integrity;
module_param(disable_pcr_integrity, bool, 0444);
MODULE_PARM_DESC(disable_pcr_integrity, "Disable integrity protection of TPM2_PCR_Extend");
static struct tpm2_hash tpm2_hash_map[] = { static struct tpm2_hash tpm2_hash_map[] = {
{HASH_ALGO_SHA1, TPM_ALG_SHA1}, {HASH_ALGO_SHA1, TPM_ALG_SHA1},
{HASH_ALGO_SHA256, TPM_ALG_SHA256}, {HASH_ALGO_SHA256, TPM_ALG_SHA256},
@ -232,18 +236,26 @@ int tpm2_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
int rc; int rc;
int i; int i;
rc = tpm2_start_auth_session(chip); if (!disable_pcr_integrity) {
if (rc) rc = tpm2_start_auth_session(chip);
return rc; if (rc)
return rc;
}
rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_EXTEND); rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_EXTEND);
if (rc) { if (rc) {
tpm2_end_auth_session(chip); if (!disable_pcr_integrity)
tpm2_end_auth_session(chip);
return rc; return rc;
} }
tpm_buf_append_name(chip, &buf, pcr_idx, NULL); if (!disable_pcr_integrity) {
tpm_buf_append_hmac_session(chip, &buf, 0, NULL, 0); tpm_buf_append_name(chip, &buf, pcr_idx, NULL);
tpm_buf_append_hmac_session(chip, &buf, 0, NULL, 0);
} else {
tpm_buf_append_handle(chip, &buf, pcr_idx);
tpm_buf_append_auth(chip, &buf, 0, NULL, 0);
}
tpm_buf_append_u32(&buf, chip->nr_allocated_banks); tpm_buf_append_u32(&buf, chip->nr_allocated_banks);
@ -253,9 +265,11 @@ int tpm2_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
chip->allocated_banks[i].digest_size); chip->allocated_banks[i].digest_size);
} }
tpm_buf_fill_hmac_session(chip, &buf); if (!disable_pcr_integrity)
tpm_buf_fill_hmac_session(chip, &buf);
rc = tpm_transmit_cmd(chip, &buf, 0, "attempting extend a PCR value"); rc = tpm_transmit_cmd(chip, &buf, 0, "attempting extend a PCR value");
rc = tpm_buf_check_hmac_response(chip, &buf, rc); if (!disable_pcr_integrity)
rc = tpm_buf_check_hmac_response(chip, &buf, rc);
tpm_buf_destroy(&buf); tpm_buf_destroy(&buf);

View File

@ -237,9 +237,7 @@ void tpm_buf_append_name(struct tpm_chip *chip, struct tpm_buf *buf,
#endif #endif
if (!tpm2_chip_auth(chip)) { if (!tpm2_chip_auth(chip)) {
tpm_buf_append_u32(buf, handle); tpm_buf_append_handle(chip, buf, handle);
/* count the number of handles in the upper bits of flags */
buf->handles++;
return; return;
} }
@ -272,6 +270,31 @@ void tpm_buf_append_name(struct tpm_chip *chip, struct tpm_buf *buf,
} }
EXPORT_SYMBOL_GPL(tpm_buf_append_name); EXPORT_SYMBOL_GPL(tpm_buf_append_name);
void tpm_buf_append_auth(struct tpm_chip *chip, struct tpm_buf *buf,
u8 attributes, u8 *passphrase, int passphrase_len)
{
/* offset tells us where the sessions area begins */
int offset = buf->handles * 4 + TPM_HEADER_SIZE;
u32 len = 9 + passphrase_len;
if (tpm_buf_length(buf) != offset) {
/* not the first session so update the existing length */
len += get_unaligned_be32(&buf->data[offset]);
put_unaligned_be32(len, &buf->data[offset]);
} else {
tpm_buf_append_u32(buf, len);
}
/* auth handle */
tpm_buf_append_u32(buf, TPM2_RS_PW);
/* nonce */
tpm_buf_append_u16(buf, 0);
/* attributes */
tpm_buf_append_u8(buf, 0);
/* passphrase */
tpm_buf_append_u16(buf, passphrase_len);
tpm_buf_append(buf, passphrase, passphrase_len);
}
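The fixed 9 bytes in len = 9 + passphrase_len above come from the password (TPM2_RS_PW) authorization area: a 4-byte handle, a 2-byte empty-nonce length, a 1-byte attributes field and a 2-byte passphrase length. A small sketch of that accounting, with invented enum names:

#include <stdint.h>

enum {
	PW_AUTH_HANDLE_BYTES    = 4,	/* TPM2_RS_PW handle */
	PW_AUTH_NONCE_LEN_BYTES = 2,	/* zero-length nonce */
	PW_AUTH_ATTRS_BYTES     = 1,	/* session attributes */
	PW_AUTH_PW_LEN_BYTES    = 2,	/* passphrase length field */
};

static inline uint32_t pw_auth_area_len(int passphrase_len)
{
	/* 4 + 2 + 1 + 2 = 9 fixed bytes, plus the passphrase itself */
	return PW_AUTH_HANDLE_BYTES + PW_AUTH_NONCE_LEN_BYTES +
	       PW_AUTH_ATTRS_BYTES + PW_AUTH_PW_LEN_BYTES + passphrase_len;
}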
/** /**
* tpm_buf_append_hmac_session() - Append a TPM session element * tpm_buf_append_hmac_session() - Append a TPM session element
* @chip: the TPM chip structure * @chip: the TPM chip structure
@ -309,26 +332,8 @@ void tpm_buf_append_hmac_session(struct tpm_chip *chip, struct tpm_buf *buf,
#endif #endif
if (!tpm2_chip_auth(chip)) { if (!tpm2_chip_auth(chip)) {
/* offset tells us where the sessions area begins */ tpm_buf_append_auth(chip, buf, attributes, passphrase,
int offset = buf->handles * 4 + TPM_HEADER_SIZE; passphrase_len);
u32 len = 9 + passphrase_len;
if (tpm_buf_length(buf) != offset) {
/* not the first session so update the existing length */
len += get_unaligned_be32(&buf->data[offset]);
put_unaligned_be32(len, &buf->data[offset]);
} else {
tpm_buf_append_u32(buf, len);
}
/* auth handle */
tpm_buf_append_u32(buf, TPM2_RS_PW);
/* nonce */
tpm_buf_append_u16(buf, 0);
/* attributes */
tpm_buf_append_u8(buf, 0);
/* passphrase */
tpm_buf_append_u16(buf, passphrase_len);
tpm_buf_append(buf, passphrase, passphrase_len);
return; return;
} }
@ -948,10 +953,13 @@ static int tpm2_load_null(struct tpm_chip *chip, u32 *null_key)
/* Deduce from the name change TPM interference: */ /* Deduce from the name change TPM interference: */
dev_err(&chip->dev, "null key integrity check failed\n"); dev_err(&chip->dev, "null key integrity check failed\n");
tpm2_flush_context(chip, tmp_null_key); tpm2_flush_context(chip, tmp_null_key);
chip->flags |= TPM_CHIP_FLAG_DISABLE;
err: err:
return rc ? -ENODEV : 0; if (rc) {
chip->flags |= TPM_CHIP_FLAG_DISABLE;
rc = -ENODEV;
}
return rc;
} }
/** /**

View File

@ -40,7 +40,7 @@
#define PLL_USER_CTL(p) ((p)->offset + (p)->regs[PLL_OFF_USER_CTL]) #define PLL_USER_CTL(p) ((p)->offset + (p)->regs[PLL_OFF_USER_CTL])
# define PLL_POST_DIV_SHIFT 8 # define PLL_POST_DIV_SHIFT 8
# define PLL_POST_DIV_MASK(p) GENMASK((p)->width - 1, 0) # define PLL_POST_DIV_MASK(p) GENMASK((p)->width ? (p)->width - 1 : 3, 0)
# define PLL_ALPHA_MSB BIT(15) # define PLL_ALPHA_MSB BIT(15)
# define PLL_ALPHA_EN BIT(24) # define PLL_ALPHA_EN BIT(24)
# define PLL_ALPHA_MODE BIT(25) # define PLL_ALPHA_MODE BIT(25)
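The PLL_POST_DIV_MASK() change above treats an unset post-divider width (0) as a 4-bit field. A standalone sketch of the arithmetic, with GENMASK() re-defined locally so the example compiles on its own:

#include <stdio.h>

#define GENMASK(h, l) \
	(((~0UL) << (l)) & (~0UL >> (sizeof(unsigned long) * 8 - 1 - (h))))

static unsigned long post_div_mask(unsigned int width)
{
	/* width == 0 means "not configured"; fall back to a 4-bit field */
	return GENMASK(width ? width - 1 : 3, 0);
}

int main(void)
{
	printf("width 0 -> %#lx\n", post_div_mask(0));	/* 0xf */
	printf("width 2 -> %#lx\n", post_div_mask(2));	/* 0x3 */
	return 0;
}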

View File

@ -3123,7 +3123,7 @@ static struct clk_branch gcc_pcie_3_pipe_clk = {
static struct clk_branch gcc_pcie_3_pipediv2_clk = { static struct clk_branch gcc_pcie_3_pipediv2_clk = {
.halt_reg = 0x58060, .halt_reg = 0x58060,
.halt_check = BRANCH_HALT_VOTED, .halt_check = BRANCH_HALT_SKIP,
.clkr = { .clkr = {
.enable_reg = 0x52020, .enable_reg = 0x52020,
.enable_mask = BIT(5), .enable_mask = BIT(5),
@ -3248,7 +3248,7 @@ static struct clk_branch gcc_pcie_4_pipe_clk = {
static struct clk_branch gcc_pcie_4_pipediv2_clk = { static struct clk_branch gcc_pcie_4_pipediv2_clk = {
.halt_reg = 0x6b054, .halt_reg = 0x6b054,
.halt_check = BRANCH_HALT_VOTED, .halt_check = BRANCH_HALT_SKIP,
.clkr = { .clkr = {
.enable_reg = 0x52010, .enable_reg = 0x52010,
.enable_mask = BIT(27), .enable_mask = BIT(27),
@ -3373,7 +3373,7 @@ static struct clk_branch gcc_pcie_5_pipe_clk = {
static struct clk_branch gcc_pcie_5_pipediv2_clk = { static struct clk_branch gcc_pcie_5_pipediv2_clk = {
.halt_reg = 0x2f054, .halt_reg = 0x2f054,
.halt_check = BRANCH_HALT_VOTED, .halt_check = BRANCH_HALT_SKIP,
.clkr = { .clkr = {
.enable_reg = 0x52018, .enable_reg = 0x52018,
.enable_mask = BIT(19), .enable_mask = BIT(19),
@ -3511,7 +3511,7 @@ static struct clk_branch gcc_pcie_6a_pipe_clk = {
static struct clk_branch gcc_pcie_6a_pipediv2_clk = { static struct clk_branch gcc_pcie_6a_pipediv2_clk = {
.halt_reg = 0x31060, .halt_reg = 0x31060,
.halt_check = BRANCH_HALT_VOTED, .halt_check = BRANCH_HALT_SKIP,
.clkr = { .clkr = {
.enable_reg = 0x52018, .enable_reg = 0x52018,
.enable_mask = BIT(28), .enable_mask = BIT(28),
@ -3649,7 +3649,7 @@ static struct clk_branch gcc_pcie_6b_pipe_clk = {
static struct clk_branch gcc_pcie_6b_pipediv2_clk = { static struct clk_branch gcc_pcie_6b_pipediv2_clk = {
.halt_reg = 0x8d060, .halt_reg = 0x8d060,
.halt_check = BRANCH_HALT_VOTED, .halt_check = BRANCH_HALT_SKIP,
.clkr = { .clkr = {
.enable_reg = 0x52010, .enable_reg = 0x52010,
.enable_mask = BIT(28), .enable_mask = BIT(28),
@ -6155,7 +6155,7 @@ static struct gdsc gcc_usb3_mp_ss1_phy_gdsc = {
.pd = { .pd = {
.name = "gcc_usb3_mp_ss1_phy_gdsc", .name = "gcc_usb3_mp_ss1_phy_gdsc",
}, },
.pwrsts = PWRSTS_OFF_ON, .pwrsts = PWRSTS_RET_ON,
.flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE, .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
}; };

View File

@ -452,7 +452,7 @@ static struct gdsc mvs0_gdsc = {
.pd = { .pd = {
.name = "mvs0_gdsc", .name = "mvs0_gdsc",
}, },
.flags = HW_CTRL | RETAIN_FF_ENABLE, .flags = HW_CTRL_TRIGGER | RETAIN_FF_ENABLE,
.pwrsts = PWRSTS_OFF_ON, .pwrsts = PWRSTS_OFF_ON,
}; };
@ -461,7 +461,7 @@ static struct gdsc mvs1_gdsc = {
.pd = { .pd = {
.name = "mvs1_gdsc", .name = "mvs1_gdsc",
}, },
.flags = HW_CTRL | RETAIN_FF_ENABLE, .flags = HW_CTRL_TRIGGER | RETAIN_FF_ENABLE,
.pwrsts = PWRSTS_OFF_ON, .pwrsts = PWRSTS_OFF_ON,
}; };

View File

@ -1028,26 +1028,29 @@ static void hybrid_update_cpu_capacity_scaling(void)
} }
} }
static void __hybrid_init_cpu_capacity_scaling(void) static void __hybrid_refresh_cpu_capacity_scaling(void)
{ {
hybrid_max_perf_cpu = NULL; hybrid_max_perf_cpu = NULL;
hybrid_update_cpu_capacity_scaling(); hybrid_update_cpu_capacity_scaling();
} }
static void hybrid_init_cpu_capacity_scaling(void) static void hybrid_refresh_cpu_capacity_scaling(void)
{ {
bool disable_itmt = false; guard(mutex)(&hybrid_capacity_lock);
mutex_lock(&hybrid_capacity_lock); __hybrid_refresh_cpu_capacity_scaling();
}
static void hybrid_init_cpu_capacity_scaling(bool refresh)
{
/* /*
* If hybrid_max_perf_cpu is set at this point, the hybrid CPU capacity * If hybrid_max_perf_cpu is set at this point, the hybrid CPU capacity
* scaling has been enabled already and the driver is just changing the * scaling has been enabled already and the driver is just changing the
* operation mode. * operation mode.
*/ */
if (hybrid_max_perf_cpu) { if (refresh) {
__hybrid_init_cpu_capacity_scaling(); hybrid_refresh_cpu_capacity_scaling();
goto unlock; return;
} }
/* /*
@ -1056,19 +1059,25 @@ static void hybrid_init_cpu_capacity_scaling(void)
* do not do that when SMT is in use. * do not do that when SMT is in use.
*/ */
if (hwp_is_hybrid && !sched_smt_active() && arch_enable_hybrid_capacity_scale()) { if (hwp_is_hybrid && !sched_smt_active() && arch_enable_hybrid_capacity_scale()) {
__hybrid_init_cpu_capacity_scaling(); hybrid_refresh_cpu_capacity_scaling();
disable_itmt = true; /*
} * Disabling ITMT causes sched domains to be rebuilt to disable asym
* packing and enable asym capacity.
unlock: */
mutex_unlock(&hybrid_capacity_lock);
/*
* Disabling ITMT causes sched domains to be rebuilt to disable asym
* packing and enable asym capacity.
*/
if (disable_itmt)
sched_clear_itmt_support(); sched_clear_itmt_support();
}
}
static bool hybrid_clear_max_perf_cpu(void)
{
bool ret;
guard(mutex)(&hybrid_capacity_lock);
ret = !!hybrid_max_perf_cpu;
hybrid_max_perf_cpu = NULL;
return ret;
} }
static void __intel_pstate_get_hwp_cap(struct cpudata *cpu) static void __intel_pstate_get_hwp_cap(struct cpudata *cpu)
@ -1392,7 +1401,7 @@ static void intel_pstate_update_limits_for_all(void)
mutex_lock(&hybrid_capacity_lock); mutex_lock(&hybrid_capacity_lock);
if (hybrid_max_perf_cpu) if (hybrid_max_perf_cpu)
__hybrid_init_cpu_capacity_scaling(); __hybrid_refresh_cpu_capacity_scaling();
mutex_unlock(&hybrid_capacity_lock); mutex_unlock(&hybrid_capacity_lock);
} }
@ -2263,6 +2272,11 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
} else { } else {
cpu->pstate.scaling = perf_ctl_scaling; cpu->pstate.scaling = perf_ctl_scaling;
} }
/*
* If the CPU is going online for the first time and it was
* offline initially, asym capacity scaling needs to be updated.
*/
hybrid_update_capacity(cpu);
} else { } else {
cpu->pstate.scaling = perf_ctl_scaling; cpu->pstate.scaling = perf_ctl_scaling;
cpu->pstate.max_pstate = pstate_funcs.get_max(cpu->cpu); cpu->pstate.max_pstate = pstate_funcs.get_max(cpu->cpu);
@ -3352,6 +3366,7 @@ static void intel_pstate_driver_cleanup(void)
static int intel_pstate_register_driver(struct cpufreq_driver *driver) static int intel_pstate_register_driver(struct cpufreq_driver *driver)
{ {
bool refresh_cpu_cap_scaling;
int ret; int ret;
if (driver == &intel_pstate) if (driver == &intel_pstate)
@ -3364,6 +3379,8 @@ static int intel_pstate_register_driver(struct cpufreq_driver *driver)
arch_set_max_freq_ratio(global.turbo_disabled); arch_set_max_freq_ratio(global.turbo_disabled);
refresh_cpu_cap_scaling = hybrid_clear_max_perf_cpu();
intel_pstate_driver = driver; intel_pstate_driver = driver;
ret = cpufreq_register_driver(intel_pstate_driver); ret = cpufreq_register_driver(intel_pstate_driver);
if (ret) { if (ret) {
@ -3373,7 +3390,7 @@ static int intel_pstate_register_driver(struct cpufreq_driver *driver)
global.min_perf_pct = min_perf_pct_min(); global.min_perf_pct = min_perf_pct_min();
hybrid_init_cpu_capacity_scaling(); hybrid_init_cpu_capacity_scaling(refresh_cpu_cap_scaling);
return 0; return 0;
} }
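Several of the intel_pstate hunks above replace explicit mutex_lock()/mutex_unlock() pairs with the scoped guard() helper from <linux/cleanup.h>, which releases the lock automatically when the scope is left. A minimal sketch of the pattern in a kernel context, with invented names:

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);
static int example_state;

static int example_read_state(void)
{
	guard(mutex)(&example_lock);	/* unlocked automatically on return */
	return example_state;
}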

View File

@ -16,7 +16,6 @@ static u32 smccc_version = ARM_SMCCC_VERSION_1_0;
static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE; static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;
bool __ro_after_init smccc_trng_available = false; bool __ro_after_init smccc_trng_available = false;
u64 __ro_after_init smccc_has_sve_hint = false;
s32 __ro_after_init smccc_soc_id_version = SMCCC_RET_NOT_SUPPORTED; s32 __ro_after_init smccc_soc_id_version = SMCCC_RET_NOT_SUPPORTED;
s32 __ro_after_init smccc_soc_id_revision = SMCCC_RET_NOT_SUPPORTED; s32 __ro_after_init smccc_soc_id_revision = SMCCC_RET_NOT_SUPPORTED;
@ -28,9 +27,6 @@ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit)
smccc_conduit = conduit; smccc_conduit = conduit;
smccc_trng_available = smccc_probe_trng(); smccc_trng_available = smccc_probe_trng();
if (IS_ENABLED(CONFIG_ARM64_SVE) &&
smccc_version >= ARM_SMCCC_VERSION_1_3)
smccc_has_sve_hint = true;
if ((smccc_version >= ARM_SMCCC_VERSION_1_2) && if ((smccc_version >= ARM_SMCCC_VERSION_1_2) &&
(smccc_conduit != SMCCC_CONDUIT_NONE)) { (smccc_conduit != SMCCC_CONDUIT_NONE)) {

View File

@ -172,8 +172,8 @@ static union acpi_object *amdgpu_atif_call(struct amdgpu_atif *atif,
&buffer); &buffer);
obj = (union acpi_object *)buffer.pointer; obj = (union acpi_object *)buffer.pointer;
/* Fail if calling the method fails and ATIF is supported */ /* Fail if calling the method fails */
if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { if (ACPI_FAILURE(status)) {
DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n", DRM_DEBUG_DRIVER("failed to evaluate ATIF got %s\n",
acpi_format_exception(status)); acpi_format_exception(status));
kfree(obj); kfree(obj);

View File

@ -402,7 +402,7 @@ static ssize_t amdgpu_debugfs_gprwave_read(struct file *f, char __user *buf, siz
int r; int r;
uint32_t *data, x; uint32_t *data, x;
if (size & 0x3 || *pos & 0x3) if (size > 4096 || size & 0x3 || *pos & 0x3)
return -EINVAL; return -EINVAL;
r = pm_runtime_get_sync(adev_to_drm(adev)->dev); r = pm_runtime_get_sync(adev_to_drm(adev)->dev);
@ -1648,7 +1648,7 @@ int amdgpu_debugfs_regs_init(struct amdgpu_device *adev)
for (i = 0; i < ARRAY_SIZE(debugfs_regs); i++) { for (i = 0; i < ARRAY_SIZE(debugfs_regs); i++) {
ent = debugfs_create_file(debugfs_regs_names[i], ent = debugfs_create_file(debugfs_regs_names[i],
S_IFREG | 0444, root, S_IFREG | 0400, root,
adev, debugfs_regs[i]); adev, debugfs_regs[i]);
if (!i && !IS_ERR_OR_NULL(ent)) if (!i && !IS_ERR_OR_NULL(ent))
i_size_write(ent->d_inode, adev->rmmio_size); i_size_write(ent->d_inode, adev->rmmio_size);
@ -2100,11 +2100,11 @@ int amdgpu_debugfs_init(struct amdgpu_device *adev)
amdgpu_securedisplay_debugfs_init(adev); amdgpu_securedisplay_debugfs_init(adev);
amdgpu_fw_attestation_debugfs_init(adev); amdgpu_fw_attestation_debugfs_init(adev);
debugfs_create_file("amdgpu_evict_vram", 0444, root, adev, debugfs_create_file("amdgpu_evict_vram", 0400, root, adev,
&amdgpu_evict_vram_fops); &amdgpu_evict_vram_fops);
debugfs_create_file("amdgpu_evict_gtt", 0444, root, adev, debugfs_create_file("amdgpu_evict_gtt", 0400, root, adev,
&amdgpu_evict_gtt_fops); &amdgpu_evict_gtt_fops);
debugfs_create_file("amdgpu_test_ib", 0444, root, adev, debugfs_create_file("amdgpu_test_ib", 0400, root, adev,
&amdgpu_debugfs_test_ib_fops); &amdgpu_debugfs_test_ib_fops);
debugfs_create_file("amdgpu_vm_info", 0444, root, adev, debugfs_create_file("amdgpu_vm_info", 0444, root, adev,
&amdgpu_debugfs_vm_info_fops); &amdgpu_debugfs_vm_info_fops);

View File

@ -482,7 +482,7 @@ static bool __aqua_vanjaram_is_valid_mode(struct amdgpu_xcp_mgr *xcp_mgr,
case AMDGPU_SPX_PARTITION_MODE: case AMDGPU_SPX_PARTITION_MODE:
return adev->gmc.num_mem_partitions == 1 && num_xcc > 0; return adev->gmc.num_mem_partitions == 1 && num_xcc > 0;
case AMDGPU_DPX_PARTITION_MODE: case AMDGPU_DPX_PARTITION_MODE:
return adev->gmc.num_mem_partitions != 8 && (num_xcc % 4) == 0; return adev->gmc.num_mem_partitions <= 2 && (num_xcc % 4) == 0;
case AMDGPU_TPX_PARTITION_MODE: case AMDGPU_TPX_PARTITION_MODE:
return (adev->gmc.num_mem_partitions == 1 || return (adev->gmc.num_mem_partitions == 1 ||
adev->gmc.num_mem_partitions == 3) && adev->gmc.num_mem_partitions == 3) &&

View File

@ -9429,6 +9429,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
bool mode_set_reset_required = false; bool mode_set_reset_required = false;
u32 i; u32 i;
struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count}; struct dc_commit_streams_params params = {dc_state->streams, dc_state->stream_count};
bool set_backlight_level = false;
/* Disable writeback */ /* Disable writeback */
for_each_old_connector_in_state(state, connector, old_con_state, i) { for_each_old_connector_in_state(state, connector, old_con_state, i) {
@ -9548,6 +9549,7 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
acrtc->hw_mode = new_crtc_state->mode; acrtc->hw_mode = new_crtc_state->mode;
crtc->hwmode = new_crtc_state->mode; crtc->hwmode = new_crtc_state->mode;
mode_set_reset_required = true; mode_set_reset_required = true;
set_backlight_level = true;
} else if (modereset_required(new_crtc_state)) { } else if (modereset_required(new_crtc_state)) {
drm_dbg_atomic(dev, drm_dbg_atomic(dev,
"Atomic commit: RESET. crtc id %d:[%p]\n", "Atomic commit: RESET. crtc id %d:[%p]\n",
@ -9599,6 +9601,19 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
acrtc->otg_inst = status->primary_otg_inst; acrtc->otg_inst = status->primary_otg_inst;
} }
} }
/* During boot-up and resume, the DC layer resets the panel brightness
* to fix a flicker issue. This leaves dm->actual_brightness out of sync
* with the current panel brightness level (dm->brightness holds the
* correct level), so restore the backlight from dm->brightness after the
* mode set.
*/
if (set_backlight_level) {
for (i = 0; i < dm->num_of_edps; i++) {
if (dm->backlight_dev[i])
amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
}
}
} }
static void dm_set_writeback(struct amdgpu_display_manager *dm, static void dm_set_writeback(struct amdgpu_display_manager *dm,

View File

@ -3127,7 +3127,9 @@ static enum bp_result bios_parser_get_vram_info(
struct atom_data_revision revision; struct atom_data_revision revision;
// vram info moved to umc_info for DCN4x // vram info moved to umc_info for DCN4x
if (info && DATA_TABLES(umc_info)) { if (dcb->ctx->dce_version >= DCN_VERSION_4_01 &&
dcb->ctx->dce_version < DCN_VERSION_MAX &&
info && DATA_TABLES(umc_info)) {
header = GET_IMAGE(struct atom_common_table_header, header = GET_IMAGE(struct atom_common_table_header,
DATA_TABLES(umc_info)); DATA_TABLES(umc_info));

View File

@ -1259,26 +1259,33 @@ static int smu_sw_init(void *handle)
smu->watermarks_bitmap = 0; smu->watermarks_bitmap = 0;
smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
smu->user_dpm_profile.user_workload_mask = 0;
atomic_set(&smu->smu_power.power_gate.vcn_gated, 1); atomic_set(&smu->smu_power.power_gate.vcn_gated, 1);
atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1); atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1);
atomic_set(&smu->smu_power.power_gate.vpe_gated, 1); atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1); atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0; smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1; smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2; smu->workload_priority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
smu->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3; smu->workload_priority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4; smu->workload_priority[PP_SMC_POWER_PROFILE_VR] = 4;
smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5; smu->workload_priority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6; smu->workload_priority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
if (smu->is_apu || if (smu->is_apu ||
!smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D)) {
smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT]; smu->driver_workload_mask =
else 1 << smu->workload_priority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D]; } else {
smu->driver_workload_mask =
1 << smu->workload_priority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
}
smu->workload_mask = smu->driver_workload_mask |
smu->user_dpm_profile.user_workload_mask;
smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT; smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D; smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING; smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
@ -2348,17 +2355,20 @@ static int smu_switch_power_profile(void *handle,
return -EINVAL; return -EINVAL;
if (!en) { if (!en) {
smu->workload_mask &= ~(1 << smu->workload_prority[type]); smu->driver_workload_mask &= ~(1 << smu->workload_priority[type]);
index = fls(smu->workload_mask); index = fls(smu->workload_mask);
index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
workload[0] = smu->workload_setting[index]; workload[0] = smu->workload_setting[index];
} else { } else {
smu->workload_mask |= (1 << smu->workload_prority[type]); smu->driver_workload_mask |= (1 << smu->workload_priority[type]);
index = fls(smu->workload_mask); index = fls(smu->workload_mask);
index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
workload[0] = smu->workload_setting[index]; workload[0] = smu->workload_setting[index];
} }
smu->workload_mask = smu->driver_workload_mask |
smu->user_dpm_profile.user_workload_mask;
if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL && if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
smu_bump_power_profile_mode(smu, workload, 0); smu_bump_power_profile_mode(smu, workload, 0);
@ -3049,12 +3059,23 @@ static int smu_set_power_profile_mode(void *handle,
uint32_t param_size) uint32_t param_size)
{ {
struct smu_context *smu = handle; struct smu_context *smu = handle;
int ret;
if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled || if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
!smu->ppt_funcs->set_power_profile_mode) !smu->ppt_funcs->set_power_profile_mode)
return -EOPNOTSUPP; return -EOPNOTSUPP;
return smu_bump_power_profile_mode(smu, param, param_size); if (smu->user_dpm_profile.user_workload_mask &
(1 << smu->workload_priority[param[param_size]]))
return 0;
smu->user_dpm_profile.user_workload_mask =
(1 << smu->workload_priority[param[param_size]]);
smu->workload_mask = smu->user_dpm_profile.user_workload_mask |
smu->driver_workload_mask;
ret = smu_bump_power_profile_mode(smu, param, param_size);
return ret;
} }
static int smu_get_fan_control_mode(void *handle, u32 *fan_mode) static int smu_get_fan_control_mode(void *handle, u32 *fan_mode)

View File

@ -240,6 +240,7 @@ struct smu_user_dpm_profile {
/* user clock state information */ /* user clock state information */
uint32_t clk_mask[SMU_CLK_COUNT]; uint32_t clk_mask[SMU_CLK_COUNT];
uint32_t clk_dependency; uint32_t clk_dependency;
uint32_t user_workload_mask;
}; };
#define SMU_TABLE_INIT(tables, table_id, s, a, d) \ #define SMU_TABLE_INIT(tables, table_id, s, a, d) \
@ -557,7 +558,8 @@ struct smu_context {
bool disable_uclk_switch; bool disable_uclk_switch;
uint32_t workload_mask; uint32_t workload_mask;
uint32_t workload_prority[WORKLOAD_POLICY_MAX]; uint32_t driver_workload_mask;
uint32_t workload_priority[WORKLOAD_POLICY_MAX];
uint32_t workload_setting[WORKLOAD_POLICY_MAX]; uint32_t workload_setting[WORKLOAD_POLICY_MAX];
uint32_t power_profile_mode; uint32_t power_profile_mode;
uint32_t default_power_profile_mode; uint32_t default_power_profile_mode;

View File

@ -1455,7 +1455,6 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
return -EINVAL; return -EINVAL;
} }
if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) && if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
(smu->smc_fw_version >= 0x360d00)) { (smu->smc_fw_version >= 0x360d00)) {
if (size != 10) if (size != 10)
@ -1523,14 +1522,14 @@ static int arcturus_set_power_profile_mode(struct smu_context *smu,
ret = smu_cmn_send_smc_msg_with_param(smu, ret = smu_cmn_send_smc_msg_with_param(smu,
SMU_MSG_SetWorkloadMask, SMU_MSG_SetWorkloadMask,
1 << workload_type, smu->workload_mask,
NULL); NULL);
if (ret) { if (ret) {
dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type); dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
return ret; return ret;
} }
smu->power_profile_mode = profile_mode; smu_cmn_assign_power_profile(smu);
return 0; return 0;
} }

View File

@ -2081,10 +2081,13 @@ static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, u
smu->power_profile_mode); smu->power_profile_mode);
if (workload_type < 0) if (workload_type < 0)
return -EINVAL; return -EINVAL;
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
1 << workload_type, NULL); smu->workload_mask, NULL);
if (ret) if (ret)
dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__); dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
else
smu_cmn_assign_power_profile(smu);
return ret; return ret;
} }

View File

@ -1786,10 +1786,13 @@ static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *
smu->power_profile_mode); smu->power_profile_mode);
if (workload_type < 0) if (workload_type < 0)
return -EINVAL; return -EINVAL;
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
1 << workload_type, NULL); smu->workload_mask, NULL);
if (ret) if (ret)
dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__); dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
else
smu_cmn_assign_power_profile(smu);
return ret; return ret;
} }

View File

@ -1079,7 +1079,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
} }
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
1 << workload_type, smu->workload_mask,
NULL); NULL);
if (ret) { if (ret) {
dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
@ -1087,7 +1087,7 @@ static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input,
return ret; return ret;
} }
smu->power_profile_mode = profile_mode; smu_cmn_assign_power_profile(smu);
return 0; return 0;
} }

View File

@ -890,14 +890,14 @@ static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, u
} }
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
1 << workload_type, smu->workload_mask,
NULL); NULL);
if (ret) { if (ret) {
dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type); dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
return ret; return ret;
} }
smu->power_profile_mode = profile_mode; smu_cmn_assign_power_profile(smu);
return 0; return 0;
} }

View File

@ -2485,7 +2485,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
DpmActivityMonitorCoeffInt_t *activity_monitor = DpmActivityMonitorCoeffInt_t *activity_monitor =
&(activity_monitor_external.DpmActivityMonitorCoeffInt); &(activity_monitor_external.DpmActivityMonitorCoeffInt);
int workload_type, ret = 0; int workload_type, ret = 0;
u32 workload_mask, selected_workload_mask; u32 workload_mask;
smu->power_profile_mode = input[size]; smu->power_profile_mode = input[size];
@ -2552,7 +2552,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
if (workload_type < 0) if (workload_type < 0)
return -EINVAL; return -EINVAL;
selected_workload_mask = workload_mask = 1 << workload_type; workload_mask = 1 << workload_type;
/* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) && if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
@ -2567,12 +2567,22 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
workload_mask |= 1 << workload_type; workload_mask |= 1 << workload_type;
} }
smu->workload_mask |= workload_mask;
ret = smu_cmn_send_smc_msg_with_param(smu, ret = smu_cmn_send_smc_msg_with_param(smu,
SMU_MSG_SetWorkloadMask, SMU_MSG_SetWorkloadMask,
workload_mask, smu->workload_mask,
NULL); NULL);
if (!ret) if (!ret) {
smu->workload_mask = selected_workload_mask; smu_cmn_assign_power_profile(smu);
if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING) {
workload_type = smu_cmn_to_asic_specific_index(smu,
CMN2ASIC_MAPPING_WORKLOAD,
PP_SMC_POWER_PROFILE_FULLSCREEN3D);
smu->power_profile_mode = smu->workload_mask & (1 << workload_type)
? PP_SMC_POWER_PROFILE_FULLSCREEN3D
: PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
}
}
return ret; return ret;
} }

View File

@ -2499,13 +2499,14 @@ static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *inp
smu->power_profile_mode); smu->power_profile_mode);
if (workload_type < 0) if (workload_type < 0)
return -EINVAL; return -EINVAL;
ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
1 << workload_type, NULL); smu->workload_mask, NULL);
if (ret) if (ret)
dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__); dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
else else
smu->workload_mask = (1 << workload_type); smu_cmn_assign_power_profile(smu);
return ret; return ret;
} }

View File

@ -367,54 +367,6 @@ static int smu_v14_0_2_store_powerplay_table(struct smu_context *smu)
return 0; return 0;
} }
#ifndef atom_smc_dpm_info_table_14_0_0
struct atom_smc_dpm_info_table_14_0_0 {
struct atom_common_table_header table_header;
BoardTable_t BoardTable;
};
#endif
static int smu_v14_0_2_append_powerplay_table(struct smu_context *smu)
{
struct smu_table_context *table_context = &smu->smu_table;
PPTable_t *smc_pptable = table_context->driver_pptable;
struct atom_smc_dpm_info_table_14_0_0 *smc_dpm_table;
BoardTable_t *BoardTable = &smc_pptable->BoardTable;
int index, ret;
index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
smc_dpm_info);
ret = amdgpu_atombios_get_data_table(smu->adev, index, NULL, NULL, NULL,
(uint8_t **)&smc_dpm_table);
if (ret)
return ret;
memcpy(BoardTable, &smc_dpm_table->BoardTable, sizeof(BoardTable_t));
return 0;
}
#if 0
static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
void **table,
uint32_t *size)
{
struct smu_table_context *smu_table = &smu->smu_table;
void *combo_pptable = smu_table->combo_pptable;
int ret = 0;
ret = smu_cmn_get_combo_pptable(smu);
if (ret)
return ret;
*table = combo_pptable;
*size = sizeof(struct smu_14_0_powerplay_table);
return 0;
}
#endif
static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu, static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
void **table, void **table,
uint32_t *size) uint32_t *size)
@ -436,16 +388,12 @@ static int smu_v14_0_2_get_pptable_from_pmfw(struct smu_context *smu,
static int smu_v14_0_2_setup_pptable(struct smu_context *smu) static int smu_v14_0_2_setup_pptable(struct smu_context *smu)
{ {
struct smu_table_context *smu_table = &smu->smu_table; struct smu_table_context *smu_table = &smu->smu_table;
struct amdgpu_device *adev = smu->adev;
int ret = 0; int ret = 0;
if (amdgpu_sriov_vf(smu->adev)) if (amdgpu_sriov_vf(smu->adev))
return 0; return 0;
if (!adev->scpm_enabled) ret = smu_v14_0_2_get_pptable_from_pmfw(smu,
ret = smu_v14_0_setup_pptable(smu);
else
ret = smu_v14_0_2_get_pptable_from_pmfw(smu,
&smu_table->power_play_table, &smu_table->power_play_table,
&smu_table->power_play_table_size); &smu_table->power_play_table_size);
if (ret) if (ret)
@ -455,16 +403,6 @@ static int smu_v14_0_2_setup_pptable(struct smu_context *smu)
if (ret) if (ret)
return ret; return ret;
/*
* With SCPM enabled, the operation below will be handled
* by PSP. Driver involvment is unnecessary and useless.
*/
if (!adev->scpm_enabled) {
ret = smu_v14_0_2_append_powerplay_table(smu);
if (ret)
return ret;
}
ret = smu_v14_0_2_check_powerplay_table(smu); ret = smu_v14_0_2_check_powerplay_table(smu);
if (ret) if (ret)
return ret; return ret;
@ -1869,12 +1807,11 @@ static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
if (workload_type < 0) if (workload_type < 0)
return -EINVAL; return -EINVAL;
ret = smu_cmn_send_smc_msg_with_param(smu, ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
SMU_MSG_SetWorkloadMask, smu->workload_mask, NULL);
1 << workload_type,
NULL);
if (!ret) if (!ret)
smu->workload_mask = 1 << workload_type; smu_cmn_assign_power_profile(smu);
return ret; return ret;
} }
@ -2799,7 +2736,6 @@ static const struct pptable_funcs smu_v14_0_2_ppt_funcs = {
.check_fw_status = smu_v14_0_check_fw_status, .check_fw_status = smu_v14_0_check_fw_status,
.setup_pptable = smu_v14_0_2_setup_pptable, .setup_pptable = smu_v14_0_2_setup_pptable,
.check_fw_version = smu_v14_0_check_fw_version, .check_fw_version = smu_v14_0_check_fw_version,
.write_pptable = smu_cmn_write_pptable,
.set_driver_table_location = smu_v14_0_set_driver_table_location, .set_driver_table_location = smu_v14_0_set_driver_table_location,
.system_features_control = smu_v14_0_system_features_control, .system_features_control = smu_v14_0_system_features_control,
.set_allowed_mask = smu_v14_0_set_allowed_mask, .set_allowed_mask = smu_v14_0_set_allowed_mask,

View File

@ -1138,6 +1138,14 @@ int smu_cmn_set_mp1_state(struct smu_context *smu,
return ret; return ret;
} }
void smu_cmn_assign_power_profile(struct smu_context *smu)
{
uint32_t index;
index = fls(smu->workload_mask);
index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
smu->power_profile_mode = smu->workload_setting[index];
}
bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev) bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev)
{ {
struct pci_dev *p = NULL; struct pci_dev *p = NULL;

View File

@ -130,6 +130,8 @@ void smu_cmn_init_soft_gpu_metrics(void *table, uint8_t frev, uint8_t crev);
int smu_cmn_set_mp1_state(struct smu_context *smu, int smu_cmn_set_mp1_state(struct smu_context *smu,
enum pp_mp1_state mp1_state); enum pp_mp1_state mp1_state);
void smu_cmn_assign_power_profile(struct smu_context *smu);
/* /*
* Helper function to make sysfs_emit_at() happy. Align buf to * Helper function to make sysfs_emit_at() happy. Align buf to
* the current page boundary and record the offset. * the current page boundary and record the offset.

View File

@ -403,7 +403,6 @@ static const struct dmi_system_id orientation_data[] = {
}, { /* Lenovo Yoga Tab 3 X90F */ }, { /* Lenovo Yoga Tab 3 X90F */
.matches = { .matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"), DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"), DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
}, },
.driver_data = (void *)&lcd1600x2560_rightside_up, .driver_data = (void *)&lcd1600x2560_rightside_up,

View File

@ -17,10 +17,14 @@
#include <drm/drm_auth.h> #include <drm/drm_auth.h>
#include <drm/drm_managed.h> #include <drm/drm_managed.h>
#include <linux/bug.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/list.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/string.h> #include <linux/string.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/xarray.h> #include <linux/xarray.h>
@ -354,6 +358,10 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
return err; return err;
} }
spin_lock(&pvr_dev->ctx_list_lock);
list_add_tail(&ctx->file_link, &pvr_file->contexts);
spin_unlock(&pvr_dev->ctx_list_lock);
return 0; return 0;
err_destroy_fw_obj: err_destroy_fw_obj:
@ -380,6 +388,11 @@ pvr_context_release(struct kref *ref_count)
container_of(ref_count, struct pvr_context, ref_count); container_of(ref_count, struct pvr_context, ref_count);
struct pvr_device *pvr_dev = ctx->pvr_dev; struct pvr_device *pvr_dev = ctx->pvr_dev;
WARN_ON(in_interrupt());
spin_lock(&pvr_dev->ctx_list_lock);
list_del(&ctx->file_link);
spin_unlock(&pvr_dev->ctx_list_lock);
xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id); xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
pvr_context_destroy_queues(ctx); pvr_context_destroy_queues(ctx);
pvr_fw_object_destroy(ctx->fw_obj); pvr_fw_object_destroy(ctx->fw_obj);
@ -437,11 +450,30 @@ pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
*/ */
void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file) void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
{ {
struct pvr_device *pvr_dev = pvr_file->pvr_dev;
struct pvr_context *ctx; struct pvr_context *ctx;
unsigned long handle; unsigned long handle;
xa_for_each(&pvr_file->ctx_handles, handle, ctx) xa_for_each(&pvr_file->ctx_handles, handle, ctx)
pvr_context_destroy(pvr_file, handle); pvr_context_destroy(pvr_file, handle);
spin_lock(&pvr_dev->ctx_list_lock);
ctx = list_first_entry(&pvr_file->contexts, struct pvr_context, file_link);
while (!list_entry_is_head(ctx, &pvr_file->contexts, file_link)) {
list_del_init(&ctx->file_link);
if (pvr_context_get_if_referenced(ctx)) {
spin_unlock(&pvr_dev->ctx_list_lock);
pvr_vm_unmap_all(ctx->vm_ctx);
pvr_context_put(ctx);
spin_lock(&pvr_dev->ctx_list_lock);
}
ctx = list_first_entry(&pvr_file->contexts, struct pvr_context, file_link);
}
spin_unlock(&pvr_dev->ctx_list_lock);
} }
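The teardown loop above follows a common shape: detach one entry at a time under the spinlock, drop the lock for work that may sleep (pvr_vm_unmap_all() here), then retake it before looking at the list again. A generic sketch of that shape with invented types; the real code additionally skips contexts whose refcount has already dropped to zero:

#include <linux/list.h>
#include <linux/spinlock.h>

struct example_entry {
	struct list_head link;
};

static void example_drain(struct list_head *head, spinlock_t *lock,
			  void (*blocking_work)(struct example_entry *))
{
	struct example_entry *e;

	spin_lock(lock);
	while (!list_empty(head)) {
		e = list_first_entry(head, struct example_entry, link);
		list_del_init(&e->link);

		/* blocking_work() may sleep, so the spinlock cannot be held */
		spin_unlock(lock);
		blocking_work(e);
		spin_lock(lock);
	}
	spin_unlock(lock);
}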
/** /**
@ -451,6 +483,7 @@ void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
void pvr_context_device_init(struct pvr_device *pvr_dev) void pvr_context_device_init(struct pvr_device *pvr_dev)
{ {
xa_init_flags(&pvr_dev->ctx_ids, XA_FLAGS_ALLOC1); xa_init_flags(&pvr_dev->ctx_ids, XA_FLAGS_ALLOC1);
spin_lock_init(&pvr_dev->ctx_list_lock);
} }
/** /**

View File

@ -85,6 +85,9 @@ struct pvr_context {
/** @compute: Transfer queue. */ /** @compute: Transfer queue. */
struct pvr_queue *transfer; struct pvr_queue *transfer;
} queues; } queues;
/** @file_link: pvr_file PVR context list link. */
struct list_head file_link;
}; };
static __always_inline struct pvr_queue * static __always_inline struct pvr_queue *
@ -123,6 +126,24 @@ pvr_context_get(struct pvr_context *ctx)
return ctx; return ctx;
} }
/**
* pvr_context_get_if_referenced() - Take an additional reference on a still
* referenced context.
* @ctx: Context pointer.
*
* Call pvr_context_put() to release.
*
* Returns:
* * True on success, or
* false if no context pointer was passed, or the context was no longer
*   referenced.
*/
static __always_inline bool
pvr_context_get_if_referenced(struct pvr_context *ctx)
{
return ctx != NULL && kref_get_unless_zero(&ctx->ref_count) != 0;
}
/** /**
* pvr_context_lookup() - Lookup context pointer from handle and file. * pvr_context_lookup() - Lookup context pointer from handle and file.
* @pvr_file: Pointer to pvr_file structure. * @pvr_file: Pointer to pvr_file structure.

View File

@ -23,6 +23,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/math.h> #include <linux/math.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/spinlock_types.h>
#include <linux/timer.h> #include <linux/timer.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/wait.h> #include <linux/wait.h>
@ -293,6 +294,12 @@ struct pvr_device {
/** @sched_wq: Workqueue for schedulers. */ /** @sched_wq: Workqueue for schedulers. */
struct workqueue_struct *sched_wq; struct workqueue_struct *sched_wq;
/**
* @ctx_list_lock: Lock to be held when accessing the context list in
* struct pvr_file.
*/
spinlock_t ctx_list_lock;
}; };
/** /**
@ -344,6 +351,9 @@ struct pvr_file {
* This array is used to allocate handles returned to userspace. * This array is used to allocate handles returned to userspace.
*/ */
struct xarray vm_ctx_handles; struct xarray vm_ctx_handles;
/** @contexts: PVR context list. */
struct list_head contexts;
}; };
/** /**

View File

@ -28,6 +28,7 @@
#include <linux/export.h> #include <linux/export.h>
#include <linux/fs.h> #include <linux/fs.h>
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mod_devicetable.h> #include <linux/mod_devicetable.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/moduleparam.h> #include <linux/moduleparam.h>
@ -1326,6 +1327,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
*/ */
pvr_file->pvr_dev = pvr_dev; pvr_file->pvr_dev = pvr_dev;
INIT_LIST_HEAD(&pvr_file->contexts);
xa_init_flags(&pvr_file->ctx_handles, XA_FLAGS_ALLOC1); xa_init_flags(&pvr_file->ctx_handles, XA_FLAGS_ALLOC1);
xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1); xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1); xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);

View File

@ -14,6 +14,7 @@
#include <drm/drm_gem.h> #include <drm/drm_gem.h>
#include <drm/drm_gpuvm.h> #include <drm/drm_gpuvm.h>
#include <linux/bug.h>
#include <linux/container_of.h> #include <linux/container_of.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/errno.h> #include <linux/errno.h>
@ -597,12 +598,26 @@ pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
} }
/** /**
* pvr_vm_context_release() - Teardown a VM context. * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
* @ref_count: Pointer to reference counter of the VM context. * @vm_ctx: Target VM context.
* *
* This function ensures that no mappings are left dangling by unmapping them * This function ensures that no mappings are left dangling by unmapping them
* all in order of ascending device-virtual address. * all in order of ascending device-virtual address.
*/ */
void
pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
{
WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,
vm_ctx->gpuvm_mgr.mm_range));
}
/**
* pvr_vm_context_release() - Teardown a VM context.
* @ref_count: Pointer to reference counter of the VM context.
*
* This function also ensures that no mappings are left dangling by calling
* pvr_vm_unmap_all.
*/
static void static void
pvr_vm_context_release(struct kref *ref_count) pvr_vm_context_release(struct kref *ref_count)
{ {
@ -612,8 +627,7 @@ pvr_vm_context_release(struct kref *ref_count)
if (vm_ctx->fw_mem_ctx_obj) if (vm_ctx->fw_mem_ctx_obj)
pvr_fw_object_destroy(vm_ctx->fw_mem_ctx_obj); pvr_fw_object_destroy(vm_ctx->fw_mem_ctx_obj);
WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start, pvr_vm_unmap_all(vm_ctx);
vm_ctx->gpuvm_mgr.mm_range));
pvr_mmu_context_destroy(vm_ctx->mmu_ctx); pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
drm_gem_private_object_fini(&vm_ctx->dummy_gem); drm_gem_private_object_fini(&vm_ctx->dummy_gem);

View File

@ -39,6 +39,7 @@ int pvr_vm_map(struct pvr_vm_context *vm_ctx,
struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset, struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
u64 device_addr, u64 size); u64 device_addr, u64 size);
int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size); int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx);
dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx); dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
struct dma_resv *pvr_vm_get_dma_resv(struct pvr_vm_context *vm_ctx); struct dma_resv *pvr_vm_get_dma_resv(struct pvr_vm_context *vm_ctx);

View File

@ -390,11 +390,15 @@ int panthor_device_mmap_io(struct panthor_device *ptdev, struct vm_area_struct *
{ {
u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT; u64 offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
if ((vma->vm_flags & VM_SHARED) == 0)
return -EINVAL;
switch (offset) { switch (offset) {
case DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET: case DRM_PANTHOR_USER_FLUSH_ID_MMIO_OFFSET:
if (vma->vm_end - vma->vm_start != PAGE_SIZE || if (vma->vm_end - vma->vm_start != PAGE_SIZE ||
(vma->vm_flags & (VM_WRITE | VM_EXEC))) (vma->vm_flags & (VM_WRITE | VM_EXEC)))
return -EINVAL; return -EINVAL;
vm_flags_clear(vma, VM_MAYWRITE);
break; break;
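The mmap change above rejects private mappings outright and clears VM_MAYWRITE so a later mprotect() cannot upgrade the read-only mapping to writable. A minimal sketch of that shape for a single read-only I/O page; the names and the remap call are illustrative, not panthor's actual path:

#include <linux/mm.h>

static int example_mmap_ro_io_page(struct vm_area_struct *vma, unsigned long pfn)
{
	if (!(vma->vm_flags & VM_SHARED))
		return -EINVAL;

	if (vma->vm_end - vma->vm_start != PAGE_SIZE ||
	    (vma->vm_flags & (VM_WRITE | VM_EXEC)))
		return -EINVAL;

	/* keep mprotect() from making the mapping writable later on */
	vm_flags_clear(vma, VM_MAYWRITE);

	return remap_pfn_range(vma, vma->vm_start, pfn, PAGE_SIZE,
			       pgprot_noncached(vma->vm_page_prot));
}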

View File

@ -1580,7 +1580,9 @@ panthor_vm_pool_get_vm(struct panthor_vm_pool *pool, u32 handle)
{ {
struct panthor_vm *vm; struct panthor_vm *vm;
xa_lock(&pool->xa);
vm = panthor_vm_get(xa_load(&pool->xa, handle)); vm = panthor_vm_get(xa_load(&pool->xa, handle));
xa_unlock(&pool->xa);
return vm; return vm;
} }

View File

@ -517,7 +517,7 @@
* [4-6] RSVD * [4-6] RSVD
* [7] Disabled * [7] Disabled
*/ */
#define CCS_MODE XE_REG(0x14804) #define CCS_MODE XE_REG(0x14804, XE_REG_OPTION_MASKED)
#define CCS_MODE_CSLICE_0_3_MASK REG_GENMASK(11, 0) /* 3 bits per cslice */ #define CCS_MODE_CSLICE_0_3_MASK REG_GENMASK(11, 0) /* 3 bits per cslice */
#define CCS_MODE_CSLICE_MASK 0x7 /* CCS0-3 + rsvd */ #define CCS_MODE_CSLICE_MASK 0x7 /* CCS0-3 + rsvd */
#define CCS_MODE_CSLICE_WIDTH ilog2(CCS_MODE_CSLICE_MASK + 1) #define CCS_MODE_CSLICE_WIDTH ilog2(CCS_MODE_CSLICE_MASK + 1)
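CCS_MODE is now declared with XE_REG_OPTION_MASKED. On these GPUs a masked register interprets its upper 16 bits as per-bit write enables for the lower 16 bits, so a write only changes the bits whose enable is set. A small standalone sketch of that encoding; the helper name is invented:

#include <stdint.h>
#include <stdio.h>

/* value in the low 16 bits, write-enable mask in the high 16 bits */
static uint32_t masked_reg_write(uint16_t mask, uint16_t value)
{
	return ((uint32_t)mask << 16) | (uint32_t)(value & mask);
}

int main(void)
{
	/* update only the 12-bit cslice field; other bits stay untouched */
	printf("%#010x\n", masked_reg_write(0x0fff, 0x0249));
	return 0;
}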

View File

@ -87,10 +87,6 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
mutex_init(&xef->exec_queue.lock); mutex_init(&xef->exec_queue.lock);
xa_init_flags(&xef->exec_queue.xa, XA_FLAGS_ALLOC1); xa_init_flags(&xef->exec_queue.xa, XA_FLAGS_ALLOC1);
spin_lock(&xe->clients.lock);
xe->clients.count++;
spin_unlock(&xe->clients.lock);
file->driver_priv = xef; file->driver_priv = xef;
kref_init(&xef->refcount); kref_init(&xef->refcount);
@ -107,17 +103,12 @@ static int xe_file_open(struct drm_device *dev, struct drm_file *file)
static void xe_file_destroy(struct kref *ref) static void xe_file_destroy(struct kref *ref)
{ {
struct xe_file *xef = container_of(ref, struct xe_file, refcount); struct xe_file *xef = container_of(ref, struct xe_file, refcount);
struct xe_device *xe = xef->xe;
xa_destroy(&xef->exec_queue.xa); xa_destroy(&xef->exec_queue.xa);
mutex_destroy(&xef->exec_queue.lock); mutex_destroy(&xef->exec_queue.lock);
xa_destroy(&xef->vm.xa); xa_destroy(&xef->vm.xa);
mutex_destroy(&xef->vm.lock); mutex_destroy(&xef->vm.lock);
spin_lock(&xe->clients.lock);
xe->clients.count--;
spin_unlock(&xe->clients.lock);
xe_drm_client_put(xef->client); xe_drm_client_put(xef->client);
kfree(xef->process_name); kfree(xef->process_name);
kfree(xef); kfree(xef);
@ -333,7 +324,6 @@ struct xe_device *xe_device_create(struct pci_dev *pdev,
xe->info.force_execlist = xe_modparam.force_execlist; xe->info.force_execlist = xe_modparam.force_execlist;
spin_lock_init(&xe->irq.lock); spin_lock_init(&xe->irq.lock);
spin_lock_init(&xe->clients.lock);
init_waitqueue_head(&xe->ufence_wq); init_waitqueue_head(&xe->ufence_wq);
View File
@ -178,4 +178,18 @@ void xe_device_declare_wedged(struct xe_device *xe);
struct xe_file *xe_file_get(struct xe_file *xef); struct xe_file *xe_file_get(struct xe_file *xef);
void xe_file_put(struct xe_file *xef); void xe_file_put(struct xe_file *xef);
/*
* Occasionally it is seen that the G2H worker starts running after a delay of more than
* a second even after being queued and activated by the Linux workqueue subsystem. This
* leads to G2H timeout error. The root cause of issue lies with scheduling latency of
* Lunarlake Hybrid CPU. Issue disappears if we disable Lunarlake atom cores from BIOS
* and this is beyond xe kmd.
*
* TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
*/
#define LNL_FLUSH_WORKQUEUE(wq__) \
flush_workqueue(wq__)
#define LNL_FLUSH_WORK(wrk__) \
flush_work(wrk__)
#endif #endif
View File
@ -353,15 +353,6 @@ struct xe_device {
struct workqueue_struct *wq; struct workqueue_struct *wq;
} sriov; } sriov;
/** @clients: drm clients info */
struct {
/** @clients.lock: Protects drm clients info */
spinlock_t lock;
/** @clients.count: number of drm clients */
u64 count;
} clients;
/** @usm: unified memory state */ /** @usm: unified memory state */
struct { struct {
/** @usm.asid: convert a ASID to VM */ /** @usm.asid: convert a ASID to VM */
View File
@ -132,12 +132,16 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (XE_IOCTL_DBG(xe, !q)) if (XE_IOCTL_DBG(xe, !q))
return -ENOENT; return -ENOENT;
if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM)) if (XE_IOCTL_DBG(xe, q->flags & EXEC_QUEUE_FLAG_VM)) {
return -EINVAL; err = -EINVAL;
goto err_exec_queue;
}
if (XE_IOCTL_DBG(xe, args->num_batch_buffer && if (XE_IOCTL_DBG(xe, args->num_batch_buffer &&
q->width != args->num_batch_buffer)) q->width != args->num_batch_buffer)) {
return -EINVAL; err = -EINVAL;
goto err_exec_queue;
}
if (XE_IOCTL_DBG(xe, q->ops->reset_status(q))) { if (XE_IOCTL_DBG(xe, q->ops->reset_status(q))) {
err = -ECANCELED; err = -ECANCELED;
@ -220,6 +224,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm); fence = xe_sync_in_fence_get(syncs, num_syncs, q, vm);
if (IS_ERR(fence)) { if (IS_ERR(fence)) {
err = PTR_ERR(fence); err = PTR_ERR(fence);
xe_vm_unlock(vm);
goto err_unlock_list; goto err_unlock_list;
} }
for (i = 0; i < num_syncs; i++) for (i = 0; i < num_syncs; i++)
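The early returns above are converted to goto so the exec-queue reference taken earlier in the ioctl is also dropped on these error paths, and the fence error path now releases the VM lock it holds. A generic, self-contained sketch of that acquire/goto-unwind shape; the resource type and error value are made up and not taken from the xe driver:

/* Every early exit after the acquisition goes through the release label. */
#include <stdio.h>
#include <stdlib.h>

struct res { int held; };

static struct res *res_get(void)
{
	struct res *r = calloc(1, sizeof(*r));
	r->held = 1;
	return r;
}

static void res_put(struct res *r)
{
	r->held = 0;
	free(r);
}

static int do_op(int arg)
{
	struct res *r = res_get();
	int err = 0;

	if (arg < 0) {          /* invalid input: don't just "return -EINVAL" */
		err = -22;      /* -EINVAL */
		goto err_put;
	}

	printf("operating on arg=%d\n", arg);
err_put:
	res_put(r);             /* runs on both success and error paths */
	return err;
}

int main(void)
{
	printf("ok path: %d\n", do_op(1));
	printf("error path: %d\n", do_op(-1));
	return 0;
}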
View File
@ -260,8 +260,14 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)
{ {
int i; int i;
/*
* Before releasing our ref to lrc and xef, accumulate our run ticks
*/
xe_exec_queue_update_run_ticks(q);
for (i = 0; i < q->width; ++i) for (i = 0; i < q->width; ++i)
xe_lrc_put(q->lrc[i]); xe_lrc_put(q->lrc[i]);
__xe_exec_queue_free(q); __xe_exec_queue_free(q);
} }
View File
@ -68,6 +68,12 @@ static void __xe_gt_apply_ccs_mode(struct xe_gt *gt, u32 num_engines)
} }
} }
/*
* Mask bits need to be set for the register. Though only Xe2+
* platforms require setting of mask bits, it won't harm for older
* platforms as these bits are unused there.
*/
mode |= CCS_MODE_CSLICE_0_3_MASK << 16;
xe_mmio_write32(gt, CCS_MODE, mode); xe_mmio_write32(gt, CCS_MODE, mode);
xe_gt_dbg(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n", xe_gt_dbg(gt, "CCS_MODE=%x config:%08x, num_engines:%d, num_slices:%d\n",
@ -133,9 +139,10 @@ ccs_mode_store(struct device *kdev, struct device_attribute *attr,
} }
/* CCS mode can only be updated when there are no drm clients */ /* CCS mode can only be updated when there are no drm clients */
spin_lock(&xe->clients.lock); mutex_lock(&xe->drm.filelist_mutex);
if (xe->clients.count) { if (!list_empty(&xe->drm.filelist)) {
spin_unlock(&xe->clients.lock); mutex_unlock(&xe->drm.filelist_mutex);
xe_gt_dbg(gt, "Rejecting compute mode change as there are active drm clients\n");
return -EBUSY; return -EBUSY;
} }
@ -146,7 +153,7 @@ ccs_mode_store(struct device *kdev, struct device_attribute *attr,
xe_gt_reset_async(gt); xe_gt_reset_async(gt);
} }
spin_unlock(&xe->clients.lock); mutex_unlock(&xe->drm.filelist_mutex);
return count; return count;
} }
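On the CCS_MODE changes above: with XE_REG_OPTION_MASKED the register follows the usual masked-register scheme on these GPUs, where bits 31:16 of a write act as per-bit enables for bits 15:0, which is what the new mode |= CCS_MODE_CSLICE_0_3_MASK << 16 line provides. A stand-alone sketch of that encoding; the field layout below is invented for illustration:

/* Toy model of a "masked" MMIO value: the upper 16 bits select which of
 * the lower 16 bits the hardware actually updates. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t masked_field_set(uint32_t mask, uint32_t value)
{
	/* write-enable bits in [31:16], payload bits in [15:0] */
	return (mask << 16) | (value & mask);
}

int main(void)
{
	uint32_t cslice_mask = 0xfff;              /* 12 payload bits */
	uint32_t mode = masked_field_set(cslice_mask, 0x249);

	printf("register write: 0x%08" PRIx32 "\n", mode);   /* 0x0fff0249 */
	return 0;
}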
View File
@ -387,6 +387,8 @@ static void pf_release_ggtt(struct xe_tile *tile, struct xe_ggtt_node *node)
* the xe_ggtt_clear() called by below xe_ggtt_remove_node(). * the xe_ggtt_clear() called by below xe_ggtt_remove_node().
*/ */
xe_ggtt_node_remove(node, false); xe_ggtt_node_remove(node, false);
} else {
xe_ggtt_node_fini(node);
} }
} }
@ -442,7 +444,7 @@ static int pf_provision_vf_ggtt(struct xe_gt *gt, unsigned int vfid, u64 size)
config->ggtt_region = node; config->ggtt_region = node;
return 0; return 0;
err: err:
xe_ggtt_node_fini(node); pf_release_ggtt(tile, node);
return err; return err;
} }
View File
@ -72,6 +72,8 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
struct xe_device *xe = gt_to_xe(gt); struct xe_device *xe = gt_to_xe(gt);
struct xe_gt_tlb_invalidation_fence *fence, *next; struct xe_gt_tlb_invalidation_fence *fence, *next;
LNL_FLUSH_WORK(&gt->uc.guc.ct.g2h_worker);
spin_lock_irq(&gt->tlb_invalidation.pending_lock); spin_lock_irq(&gt->tlb_invalidation.pending_lock);
list_for_each_entry_safe(fence, next, list_for_each_entry_safe(fence, next,
&gt->tlb_invalidation.pending_fences, link) { &gt->tlb_invalidation.pending_fences, link) {
View File
@ -897,17 +897,8 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ); ret = wait_event_timeout(ct->g2h_fence_wq, g2h_fence.done, HZ);
/*
* Occasionally it is seen that the G2H worker starts running after a delay of more than
* a second even after being queued and activated by the Linux workqueue subsystem. This
* leads to G2H timeout error. The root cause of issue lies with scheduling latency of
* Lunarlake Hybrid CPU. Issue dissappears if we disable Lunarlake atom cores from BIOS
* and this is beyond xe kmd.
*
* TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
*/
if (!ret) { if (!ret) {
flush_work(&ct->g2h_worker); LNL_FLUSH_WORK(&ct->g2h_worker);
if (g2h_fence.done) { if (g2h_fence.done) {
xe_gt_warn(gt, "G2H fence %u, action %04x, done\n", xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
g2h_fence.seqno, action[0]); g2h_fence.seqno, action[0]);
View File
@ -745,8 +745,6 @@ static void guc_exec_queue_free_job(struct drm_sched_job *drm_job)
{ {
struct xe_sched_job *job = to_xe_sched_job(drm_job); struct xe_sched_job *job = to_xe_sched_job(drm_job);
xe_exec_queue_update_run_ticks(job->q);
trace_xe_sched_job_free(job); trace_xe_sched_job_free(job);
xe_sched_job_put(job); xe_sched_job_put(job);
} }
View File
@ -155,6 +155,13 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
} }
if (!timeout) { if (!timeout) {
LNL_FLUSH_WORKQUEUE(xe->ordered_wq);
err = do_compare(addr, args->value, args->mask,
args->op);
if (err <= 0) {
drm_dbg(&xe->drm, "LNL_FLUSH_WORKQUEUE resolved ufence timeout\n");
break;
}
err = -ETIME; err = -ETIME;
break; break;
} }
View File
@ -524,7 +524,7 @@ int i2c_dw_set_sda_hold(struct dw_i2c_dev *dev)
void __i2c_dw_disable(struct dw_i2c_dev *dev) void __i2c_dw_disable(struct dw_i2c_dev *dev)
{ {
struct i2c_timings *t = &dev->timings; struct i2c_timings *t = &dev->timings;
unsigned int raw_intr_stats; unsigned int raw_intr_stats, ic_stats;
unsigned int enable; unsigned int enable;
int timeout = 100; int timeout = 100;
bool abort_needed; bool abort_needed;
@ -532,9 +532,11 @@ void __i2c_dw_disable(struct dw_i2c_dev *dev)
int ret; int ret;
regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_intr_stats); regmap_read(dev->map, DW_IC_RAW_INTR_STAT, &raw_intr_stats);
regmap_read(dev->map, DW_IC_STATUS, &ic_stats);
regmap_read(dev->map, DW_IC_ENABLE, &enable); regmap_read(dev->map, DW_IC_ENABLE, &enable);
abort_needed = raw_intr_stats & DW_IC_INTR_MST_ON_HOLD; abort_needed = (raw_intr_stats & DW_IC_INTR_MST_ON_HOLD) ||
(ic_stats & DW_IC_STATUS_MASTER_HOLD_TX_FIFO_EMPTY);
if (abort_needed) { if (abort_needed) {
if (!(enable & DW_IC_ENABLE_ENABLE)) { if (!(enable & DW_IC_ENABLE_ENABLE)) {
regmap_write(dev->map, DW_IC_ENABLE, DW_IC_ENABLE_ENABLE); regmap_write(dev->map, DW_IC_ENABLE, DW_IC_ENABLE_ENABLE);
View File
@ -116,6 +116,7 @@
#define DW_IC_STATUS_RFNE BIT(3) #define DW_IC_STATUS_RFNE BIT(3)
#define DW_IC_STATUS_MASTER_ACTIVITY BIT(5) #define DW_IC_STATUS_MASTER_ACTIVITY BIT(5)
#define DW_IC_STATUS_SLAVE_ACTIVITY BIT(6) #define DW_IC_STATUS_SLAVE_ACTIVITY BIT(6)
#define DW_IC_STATUS_MASTER_HOLD_TX_FIFO_EMPTY BIT(7)
#define DW_IC_SDA_HOLD_RX_SHIFT 16 #define DW_IC_SDA_HOLD_RX_SHIFT 16
#define DW_IC_SDA_HOLD_RX_MASK GENMASK(23, 16) #define DW_IC_SDA_HOLD_RX_MASK GENMASK(23, 16)
View File
@ -66,8 +66,8 @@ static int mule_i2c_mux_probe(struct platform_device *pdev)
priv = i2c_mux_priv(muxc); priv = i2c_mux_priv(muxc);
priv->regmap = dev_get_regmap(mux_dev->parent, NULL); priv->regmap = dev_get_regmap(mux_dev->parent, NULL);
if (IS_ERR(priv->regmap)) if (!priv->regmap)
return dev_err_probe(mux_dev, PTR_ERR(priv->regmap), return dev_err_probe(mux_dev, -ENODEV,
"No parent i2c register map\n"); "No parent i2c register map\n");
platform_set_drvdata(pdev, muxc); platform_set_drvdata(pdev, muxc);
View File
@ -524,6 +524,13 @@ static int gic_irq_set_irqchip_state(struct irq_data *d,
} }
gic_poke_irq(d, reg); gic_poke_irq(d, reg);
/*
* Force read-back to guarantee that the active state has taken
* effect, and won't race with a guest-driven deactivation.
*/
if (reg == GICD_ISACTIVER)
gic_peek_irq(d, reg);
return 0; return 0;
} }
View File
@ -2471,7 +2471,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
int r; int r;
unsigned int num_locks; unsigned int num_locks;
struct dm_bufio_client *c; struct dm_bufio_client *c;
char slab_name[27]; char slab_name[64];
static atomic_t seqno = ATOMIC_INIT(0);
if (!block_size || block_size & ((1 << SECTOR_SHIFT) - 1)) { if (!block_size || block_size & ((1 << SECTOR_SHIFT) - 1)) {
DMERR("%s: block size not specified or is not multiple of 512b", __func__); DMERR("%s: block size not specified or is not multiple of 512b", __func__);
@ -2522,7 +2523,8 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
(block_size < PAGE_SIZE || !is_power_of_2(block_size))) { (block_size < PAGE_SIZE || !is_power_of_2(block_size))) {
unsigned int align = min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE); unsigned int align = min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE);
snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u", block_size); snprintf(slab_name, sizeof(slab_name), "dm_bufio_cache-%u-%u",
block_size, atomic_inc_return(&seqno));
c->slab_cache = kmem_cache_create(slab_name, block_size, align, c->slab_cache = kmem_cache_create(slab_name, block_size, align,
SLAB_RECLAIM_ACCOUNT, NULL); SLAB_RECLAIM_ACCOUNT, NULL);
if (!c->slab_cache) { if (!c->slab_cache) {
@ -2531,9 +2533,11 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
} }
} }
if (aux_size) if (aux_size)
snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u", aux_size); snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u-%u",
aux_size, atomic_inc_return(&seqno));
else else
snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer"); snprintf(slab_name, sizeof(slab_name), "dm_bufio_buffer-%u",
atomic_inc_return(&seqno));
c->slab_buffer = kmem_cache_create(slab_name, sizeof(struct dm_buffer) + aux_size, c->slab_buffer = kmem_cache_create(slab_name, sizeof(struct dm_buffer) + aux_size,
0, SLAB_RECLAIM_ACCOUNT, NULL); 0, SLAB_RECLAIM_ACCOUNT, NULL);
if (!c->slab_buffer) { if (!c->slab_buffer) {
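A note on the dm-bufio hunk above: the slab cache names now carry a per-client sequence number (and slab_name[] grows from 27 to 64 bytes to hold it), so two clients with the same block size no longer register identically named caches. A userspace sketch of the same naming scheme using C11 atomics; buffer sizes and the example block size are arbitrary:

/* Unique cache name per call, roughly what slab_name[] plus seqno do above. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint seqno;

static void make_cache_name(char *buf, size_t len, unsigned int block_size)
{
	snprintf(buf, len, "dm_bufio_cache-%u-%u",
		 block_size, atomic_fetch_add(&seqno, 1) + 1);
}

int main(void)
{
	char a[64], b[64];

	make_cache_name(a, sizeof(a), 4096);
	make_cache_name(b, sizeof(b), 4096);   /* same size, different name */
	printf("%s\n%s\n", a, b);
	return 0;
}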
View File
@ -11,12 +11,6 @@
#define DM_MSG_PREFIX "dm-background-tracker" #define DM_MSG_PREFIX "dm-background-tracker"
struct bt_work {
struct list_head list;
struct rb_node node;
struct policy_work work;
};
struct background_tracker { struct background_tracker {
unsigned int max_work; unsigned int max_work;
atomic_t pending_promotes; atomic_t pending_promotes;
@ -26,10 +20,10 @@ struct background_tracker {
struct list_head issued; struct list_head issued;
struct list_head queued; struct list_head queued;
struct rb_root pending; struct rb_root pending;
struct kmem_cache *work_cache;
}; };
struct kmem_cache *btracker_work_cache = NULL;
struct background_tracker *btracker_create(unsigned int max_work) struct background_tracker *btracker_create(unsigned int max_work)
{ {
struct background_tracker *b = kmalloc(sizeof(*b), GFP_KERNEL); struct background_tracker *b = kmalloc(sizeof(*b), GFP_KERNEL);
@ -48,12 +42,6 @@ struct background_tracker *btracker_create(unsigned int max_work)
INIT_LIST_HEAD(&b->queued); INIT_LIST_HEAD(&b->queued);
b->pending = RB_ROOT; b->pending = RB_ROOT;
b->work_cache = KMEM_CACHE(bt_work, 0);
if (!b->work_cache) {
DMERR("couldn't create mempool for background work items");
kfree(b);
b = NULL;
}
return b; return b;
} }
@ -66,10 +54,9 @@ void btracker_destroy(struct background_tracker *b)
BUG_ON(!list_empty(&b->issued)); BUG_ON(!list_empty(&b->issued));
list_for_each_entry_safe (w, tmp, &b->queued, list) { list_for_each_entry_safe (w, tmp, &b->queued, list) {
list_del(&w->list); list_del(&w->list);
kmem_cache_free(b->work_cache, w); kmem_cache_free(btracker_work_cache, w);
} }
kmem_cache_destroy(b->work_cache);
kfree(b); kfree(b);
} }
EXPORT_SYMBOL_GPL(btracker_destroy); EXPORT_SYMBOL_GPL(btracker_destroy);
@ -180,7 +167,7 @@ static struct bt_work *alloc_work(struct background_tracker *b)
if (max_work_reached(b)) if (max_work_reached(b))
return NULL; return NULL;
return kmem_cache_alloc(b->work_cache, GFP_NOWAIT); return kmem_cache_alloc(btracker_work_cache, GFP_NOWAIT);
} }
int btracker_queue(struct background_tracker *b, int btracker_queue(struct background_tracker *b,
@ -203,7 +190,7 @@ int btracker_queue(struct background_tracker *b,
* There was a race, we'll just ignore this second * There was a race, we'll just ignore this second
* bit of work for the same oblock. * bit of work for the same oblock.
*/ */
kmem_cache_free(b->work_cache, w); kmem_cache_free(btracker_work_cache, w);
return -EINVAL; return -EINVAL;
} }
@ -244,7 +231,7 @@ void btracker_complete(struct background_tracker *b,
update_stats(b, &w->work, -1); update_stats(b, &w->work, -1);
rb_erase(&w->node, &b->pending); rb_erase(&w->node, &b->pending);
list_del(&w->list); list_del(&w->list);
kmem_cache_free(b->work_cache, w); kmem_cache_free(btracker_work_cache, w);
} }
EXPORT_SYMBOL_GPL(btracker_complete); EXPORT_SYMBOL_GPL(btracker_complete);
View File
@ -26,6 +26,14 @@
* protected with a spinlock. * protected with a spinlock.
*/ */
struct bt_work {
struct list_head list;
struct rb_node node;
struct policy_work work;
};
extern struct kmem_cache *btracker_work_cache;
struct background_work; struct background_work;
struct background_tracker; struct background_tracker;
View File
@ -10,6 +10,7 @@
#include "dm-bio-record.h" #include "dm-bio-record.h"
#include "dm-cache-metadata.h" #include "dm-cache-metadata.h"
#include "dm-io-tracker.h" #include "dm-io-tracker.h"
#include "dm-cache-background-tracker.h"
#include <linux/dm-io.h> #include <linux/dm-io.h>
#include <linux/dm-kcopyd.h> #include <linux/dm-kcopyd.h>
@ -2263,7 +2264,7 @@ static int parse_cache_args(struct cache_args *ca, int argc, char **argv,
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
static struct kmem_cache *migration_cache; static struct kmem_cache *migration_cache = NULL;
#define NOT_CORE_OPTION 1 #define NOT_CORE_OPTION 1
@ -3445,22 +3446,36 @@ static int __init dm_cache_init(void)
int r; int r;
migration_cache = KMEM_CACHE(dm_cache_migration, 0); migration_cache = KMEM_CACHE(dm_cache_migration, 0);
if (!migration_cache) if (!migration_cache) {
return -ENOMEM; r = -ENOMEM;
goto err;
}
btracker_work_cache = kmem_cache_create("dm_cache_bt_work",
sizeof(struct bt_work), __alignof__(struct bt_work), 0, NULL);
if (!btracker_work_cache) {
r = -ENOMEM;
goto err;
}
r = dm_register_target(&cache_target); r = dm_register_target(&cache_target);
if (r) { if (r) {
kmem_cache_destroy(migration_cache); goto err;
return r;
} }
return 0; return 0;
err:
kmem_cache_destroy(migration_cache);
kmem_cache_destroy(btracker_work_cache);
return r;
} }
static void __exit dm_cache_exit(void) static void __exit dm_cache_exit(void)
{ {
dm_unregister_target(&cache_target); dm_unregister_target(&cache_target);
kmem_cache_destroy(migration_cache); kmem_cache_destroy(migration_cache);
kmem_cache_destroy(btracker_work_cache);
} }
module_init(dm_cache_init); module_init(dm_cache_init);
View File
@ -348,12 +348,12 @@ static int get_edid_tag_location(const u8 *edid, unsigned int size,
/* Return if not a CTA-861 extension block */ /* Return if not a CTA-861 extension block */
if (size < 256 || edid[0] != 0x02 || edid[1] != 0x03) if (size < 256 || edid[0] != 0x02 || edid[1] != 0x03)
return -1; return -ENOENT;
/* search tag */ /* search tag */
d = edid[0x02] & 0x7f; d = edid[0x02] & 0x7f;
if (d <= 4) if (d <= 4)
return -1; return -ENOENT;
i = 0x04; i = 0x04;
end = 0x00 + d; end = 0x00 + d;
@ -371,7 +371,7 @@ static int get_edid_tag_location(const u8 *edid, unsigned int size,
return offset + i; return offset + i;
i += len + 1; i += len + 1;
} while (i < end); } while (i < end);
return -1; return -ENOENT;
} }
static void extron_edid_crc(u8 *edid) static void extron_edid_crc(u8 *edid)
View File
@ -685,7 +685,7 @@ static int pulse8_setup(struct pulse8 *pulse8, struct serio *serio,
err = pulse8_send_and_wait(pulse8, cmd, 1, cmd[0], 4); err = pulse8_send_and_wait(pulse8, cmd, 1, cmd[0], 4);
if (err) if (err)
return err; return err;
date = (data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3]; date = ((unsigned)data[0] << 24) | (data[1] << 16) | (data[2] << 8) | data[3];
dev_info(pulse8->dev, "Firmware build date %ptT\n", &date); dev_info(pulse8->dev, "Firmware build date %ptT\n", &date);
dev_dbg(pulse8->dev, "Persistent config:\n"); dev_dbg(pulse8->dev, "Persistent config:\n");
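The (unsigned) cast above matters because data[0] is a byte that gets promoted to signed int before the << 24; values of 0x80 and up then produce a negative 32-bit result that sign-extends into the 64-bit date. A small stand-alone demonstration (written to avoid the original undefined shift by going through an explicit int32_t):

/* How a byte >= 0x80 shifted into bit 31 turns into a negative 64-bit value. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint8_t b = 0x90;                                /* any value >= 0x80 */

	int32_t as_int32 = (int32_t)((uint32_t)b << 24); /* 0x90000000, negative */
	int64_t bad  = as_int32;                         /* sign-extends */
	int64_t good = (uint64_t)((uint32_t)b << 24);    /* stays positive */

	printf("bad  = %lld\n", (long long)bad);         /* -1879048192 */
	printf("good = %lld\n", (long long)good);        /* 2415919104 */
	return 0;
}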
View File
@ -1795,6 +1795,9 @@ static void tpg_precalculate_line(struct tpg_data *tpg)
unsigned p; unsigned p;
unsigned x; unsigned x;
if (WARN_ON_ONCE(!tpg->src_width || !tpg->scaled_width))
return;
switch (tpg->pattern) { switch (tpg->pattern) {
case TPG_PAT_GREEN: case TPG_PAT_GREEN:
contrast = TPG_COLOR_100_RED; contrast = TPG_COLOR_100_RED;
View File
@ -1482,18 +1482,23 @@ static int __prepare_dmabuf(struct vb2_buffer *vb)
} }
vb->planes[plane].dbuf_mapped = 1; vb->planes[plane].dbuf_mapped = 1;
} }
} else {
for (plane = 0; plane < vb->num_planes; ++plane)
dma_buf_put(planes[plane].dbuf);
}
/* /*
* Now that everything is in order, copy relevant information * Now that everything is in order, copy relevant information
* provided by userspace. * provided by userspace.
*/ */
for (plane = 0; plane < vb->num_planes; ++plane) { for (plane = 0; plane < vb->num_planes; ++plane) {
vb->planes[plane].bytesused = planes[plane].bytesused; vb->planes[plane].bytesused = planes[plane].bytesused;
vb->planes[plane].length = planes[plane].length; vb->planes[plane].length = planes[plane].length;
vb->planes[plane].m.fd = planes[plane].m.fd; vb->planes[plane].m.fd = planes[plane].m.fd;
vb->planes[plane].data_offset = planes[plane].data_offset; vb->planes[plane].data_offset = planes[plane].data_offset;
} }
if (reacquired) {
/* /*
* Call driver-specific initialization on the newly acquired buffer, * Call driver-specific initialization on the newly acquired buffer,
* if provided. * if provided.
@ -1503,9 +1508,6 @@ static int __prepare_dmabuf(struct vb2_buffer *vb)
dprintk(q, 1, "buffer initialization failed\n"); dprintk(q, 1, "buffer initialization failed\n");
goto err_put_vb2_buf; goto err_put_vb2_buf;
} }
} else {
for (plane = 0; plane < vb->num_planes; ++plane)
dma_buf_put(planes[plane].dbuf);
} }
ret = call_vb_qop(vb, buf_prepare, vb); ret = call_vb_qop(vb, buf_prepare, vb);
View File
@ -443,8 +443,8 @@ static int dvb_frontend_swzigzag_autotune(struct dvb_frontend *fe, int check_wra
default: default:
fepriv->auto_step++; fepriv->auto_step++;
fepriv->auto_sub_step = -1; /* it'll be incremented to 0 in a moment */ fepriv->auto_sub_step = 0;
break; continue;
} }
if (!ready) fepriv->auto_sub_step++; if (!ready) fepriv->auto_sub_step++;
View File
@ -366,9 +366,15 @@ int dvb_vb2_querybuf(struct dvb_vb2_ctx *ctx, struct dmx_buffer *b)
int dvb_vb2_expbuf(struct dvb_vb2_ctx *ctx, struct dmx_exportbuffer *exp) int dvb_vb2_expbuf(struct dvb_vb2_ctx *ctx, struct dmx_exportbuffer *exp)
{ {
struct vb2_queue *q = &ctx->vb_q; struct vb2_queue *q = &ctx->vb_q;
struct vb2_buffer *vb2 = vb2_get_buffer(q, exp->index);
int ret; int ret;
ret = vb2_core_expbuf(&ctx->vb_q, &exp->fd, q->type, q->bufs[exp->index], if (!vb2) {
dprintk(1, "[%s] invalid buffer index\n", ctx->name);
return -EINVAL;
}
ret = vb2_core_expbuf(&ctx->vb_q, &exp->fd, q->type, vb2,
0, exp->flags); 0, exp->flags);
if (ret) { if (ret) {
dprintk(1, "[%s] index=%d errno=%d\n", ctx->name, dprintk(1, "[%s] index=%d errno=%d\n", ctx->name,
View File
@ -86,10 +86,15 @@ static DECLARE_RWSEM(minor_rwsem);
static int dvb_device_open(struct inode *inode, struct file *file) static int dvb_device_open(struct inode *inode, struct file *file)
{ {
struct dvb_device *dvbdev; struct dvb_device *dvbdev;
unsigned int minor = iminor(inode);
if (minor >= MAX_DVB_MINORS)
return -ENODEV;
mutex_lock(&dvbdev_mutex); mutex_lock(&dvbdev_mutex);
down_read(&minor_rwsem); down_read(&minor_rwsem);
dvbdev = dvb_minors[iminor(inode)];
dvbdev = dvb_minors[minor];
if (dvbdev && dvbdev->fops) { if (dvbdev && dvbdev->fops) {
int err = 0; int err = 0;
@ -525,7 +530,10 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
for (minor = 0; minor < MAX_DVB_MINORS; minor++) for (minor = 0; minor < MAX_DVB_MINORS; minor++)
if (!dvb_minors[minor]) if (!dvb_minors[minor])
break; break;
if (minor == MAX_DVB_MINORS) { #else
minor = nums2minor(adap->num, type, id);
#endif
if (minor >= MAX_DVB_MINORS) {
if (new_node) { if (new_node) {
list_del(&new_node->list_head); list_del(&new_node->list_head);
kfree(dvbdevfops); kfree(dvbdevfops);
@ -538,9 +546,7 @@ int dvb_register_device(struct dvb_adapter *adap, struct dvb_device **pdvbdev,
mutex_unlock(&dvbdev_register_lock); mutex_unlock(&dvbdev_register_lock);
return -EINVAL; return -EINVAL;
} }
#else
minor = nums2minor(adap->num, type, id);
#endif
dvbdev->minor = minor; dvbdev->minor = minor;
dvb_minors[minor] = dvb_device_get(dvbdev); dvb_minors[minor] = dvb_device_get(dvbdev);
up_write(&minor_rwsem); up_write(&minor_rwsem);
View File
@ -741,6 +741,7 @@ static int cx24116_read_snr_pct(struct dvb_frontend *fe, u16 *snr)
{ {
struct cx24116_state *state = fe->demodulator_priv; struct cx24116_state *state = fe->demodulator_priv;
u8 snr_reading; u8 snr_reading;
int ret;
static const u32 snr_tab[] = { /* 10 x Table (rounded up) */ static const u32 snr_tab[] = { /* 10 x Table (rounded up) */
0x00000, 0x0199A, 0x03333, 0x04ccD, 0x06667, 0x00000, 0x0199A, 0x03333, 0x04ccD, 0x06667,
0x08000, 0x0999A, 0x0b333, 0x0cccD, 0x0e667, 0x08000, 0x0999A, 0x0b333, 0x0cccD, 0x0e667,
@ -749,7 +750,11 @@ static int cx24116_read_snr_pct(struct dvb_frontend *fe, u16 *snr)
dprintk("%s()\n", __func__); dprintk("%s()\n", __func__);
snr_reading = cx24116_readreg(state, CX24116_REG_QUALITY0); ret = cx24116_readreg(state, CX24116_REG_QUALITY0);
if (ret < 0)
return ret;
snr_reading = ret;
if (snr_reading >= 0xa0 /* 100% */) if (snr_reading >= 0xa0 /* 100% */)
*snr = 0xffff; *snr = 0xffff;
View File
@ -269,7 +269,7 @@ static enum stb0899_status stb0899_search_carrier(struct stb0899_state *state)
short int derot_freq = 0, last_derot_freq = 0, derot_limit, next_loop = 3; short int derot_freq = 0, last_derot_freq = 0, derot_limit, next_loop = 3;
int index = 0; int index = 0;
u8 cfr[2]; u8 cfr[2] = {0};
u8 reg; u8 reg;
internal->status = NOCARRIER; internal->status = NOCARRIER;
View File
@ -2519,10 +2519,10 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
const struct adv76xx_chip_info *info = state->info; const struct adv76xx_chip_info *info = state->info;
struct v4l2_dv_timings timings; struct v4l2_dv_timings timings;
struct stdi_readback stdi; struct stdi_readback stdi;
u8 reg_io_0x02 = io_read(sd, 0x02); int ret;
u8 reg_io_0x02;
u8 edid_enabled; u8 edid_enabled;
u8 cable_det; u8 cable_det;
static const char * const csc_coeff_sel_rb[16] = { static const char * const csc_coeff_sel_rb[16] = {
"bypassed", "YPbPr601 -> RGB", "reserved", "YPbPr709 -> RGB", "bypassed", "YPbPr601 -> RGB", "reserved", "YPbPr709 -> RGB",
"reserved", "RGB -> YPbPr601", "reserved", "RGB -> YPbPr709", "reserved", "RGB -> YPbPr601", "reserved", "RGB -> YPbPr709",
@ -2621,13 +2621,21 @@ static int adv76xx_log_status(struct v4l2_subdev *sd)
v4l2_info(sd, "-----Color space-----\n"); v4l2_info(sd, "-----Color space-----\n");
v4l2_info(sd, "RGB quantization range ctrl: %s\n", v4l2_info(sd, "RGB quantization range ctrl: %s\n",
rgb_quantization_range_txt[state->rgb_quantization_range]); rgb_quantization_range_txt[state->rgb_quantization_range]);
v4l2_info(sd, "Input color space: %s\n",
input_color_space_txt[reg_io_0x02 >> 4]); ret = io_read(sd, 0x02);
v4l2_info(sd, "Output color space: %s %s, alt-gamma %s\n", if (ret < 0) {
(reg_io_0x02 & 0x02) ? "RGB" : "YCbCr", v4l2_info(sd, "Can't read Input/Output color space\n");
(((reg_io_0x02 >> 2) & 0x01) ^ (reg_io_0x02 & 0x01)) ? } else {
"(16-235)" : "(0-255)", reg_io_0x02 = ret;
(reg_io_0x02 & 0x08) ? "enabled" : "disabled");
v4l2_info(sd, "Input color space: %s\n",
input_color_space_txt[reg_io_0x02 >> 4]);
v4l2_info(sd, "Output color space: %s %s, alt-gamma %s\n",
(reg_io_0x02 & 0x02) ? "RGB" : "YCbCr",
(((reg_io_0x02 >> 2) & 0x01) ^ (reg_io_0x02 & 0x01)) ?
"(16-235)" : "(0-255)",
(reg_io_0x02 & 0x08) ? "enabled" : "disabled");
}
v4l2_info(sd, "Color space conversion: %s\n", v4l2_info(sd, "Color space conversion: %s\n",
csc_coeff_sel_rb[cp_read(sd, info->cp_csc) >> 4]); csc_coeff_sel_rb[cp_read(sd, info->cp_csc) >> 4]);
View File
@ -255,10 +255,10 @@ static u32 calc_pll(struct ar0521_dev *sensor, u32 freq, u16 *pre_ptr, u16 *mult
continue; /* Minimum value */ continue; /* Minimum value */
if (new_mult > 254) if (new_mult > 254)
break; /* Maximum, larger pre won't work either */ break; /* Maximum, larger pre won't work either */
if (sensor->extclk_freq * (u64)new_mult < AR0521_PLL_MIN * if (sensor->extclk_freq * (u64)new_mult < (u64)AR0521_PLL_MIN *
new_pre) new_pre)
continue; continue;
if (sensor->extclk_freq * (u64)new_mult > AR0521_PLL_MAX * if (sensor->extclk_freq * (u64)new_mult > (u64)AR0521_PLL_MAX *
new_pre) new_pre)
break; /* Larger pre won't work either */ break; /* Larger pre won't work either */
new_pll = div64_round_up(sensor->extclk_freq * (u64)new_mult, new_pll = div64_round_up(sensor->extclk_freq * (u64)new_mult,
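The added (u64) casts above keep the right-hand side of the range checks from being evaluated in 32-bit arithmetic, where a large PLL limit times the predivider can wrap and let out-of-range settings through. A small stand-alone illustration; the limit and predivider values below are invented:

/* 32-bit product wraps, 64-bit product does not. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t pll_min = 320000000;              /* hypothetical 320 MHz limit */
	uint16_t pre = 16;

	uint32_t wrapped = pll_min * pre;          /* 5.12e9 wraps in 32 bits */
	uint64_t exact   = (uint64_t)pll_min * pre;

	printf("wrapped = %" PRIu32 "\n", wrapped); /* 825032704 */
	printf("exact   = %" PRIu64 "\n", exact);   /* 5120000000 */
	return 0;
}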
View File
@ -227,6 +227,8 @@ void mgb4_cmt_set_vin_freq_range(struct mgb4_vin_dev *vindev,
u32 config; u32 config;
size_t i; size_t i;
freq_range = array_index_nospec(freq_range, ARRAY_SIZE(cmt_vals_in));
addr = cmt_addrs_in[vindev->config->id]; addr = cmt_addrs_in[vindev->config->id];
reg_set = cmt_vals_in[freq_range]; reg_set = cmt_vals_in[freq_range];
View File
@ -775,11 +775,14 @@ static void exynos4_jpeg_parse_decode_h_tbl(struct s5p_jpeg_ctx *ctx)
(unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2; (unsigned long)vb2_plane_vaddr(&vb->vb2_buf, 0) + ctx->out_q.sos + 2;
jpeg_buffer.curr = 0; jpeg_buffer.curr = 0;
word = 0;
if (get_word_be(&jpeg_buffer, &word)) if (get_word_be(&jpeg_buffer, &word))
return; return;
jpeg_buffer.size = (long)word - 2;
if (word < 2)
jpeg_buffer.size = 0;
else
jpeg_buffer.size = (long)word - 2;
jpeg_buffer.data += 2; jpeg_buffer.data += 2;
jpeg_buffer.curr = 0; jpeg_buffer.curr = 0;
@ -1058,6 +1061,7 @@ static int get_word_be(struct s5p_jpeg_buffer *buf, unsigned int *word)
if (byte == -1) if (byte == -1)
return -1; return -1;
*word = (unsigned int)byte | temp; *word = (unsigned int)byte | temp;
return 0; return 0;
} }
@ -1145,7 +1149,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
if (get_word_be(&jpeg_buffer, &word)) if (get_word_be(&jpeg_buffer, &word))
break; break;
length = (long)word - 2; length = (long)word - 2;
if (!length) if (length <= 0)
return false; return false;
sof = jpeg_buffer.curr; /* after 0xffc0 */ sof = jpeg_buffer.curr; /* after 0xffc0 */
sof_len = length; sof_len = length;
@ -1176,7 +1180,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
if (get_word_be(&jpeg_buffer, &word)) if (get_word_be(&jpeg_buffer, &word))
break; break;
length = (long)word - 2; length = (long)word - 2;
if (!length) if (length <= 0)
return false; return false;
if (n_dqt >= S5P_JPEG_MAX_MARKER) if (n_dqt >= S5P_JPEG_MAX_MARKER)
return false; return false;
@ -1189,7 +1193,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
if (get_word_be(&jpeg_buffer, &word)) if (get_word_be(&jpeg_buffer, &word))
break; break;
length = (long)word - 2; length = (long)word - 2;
if (!length) if (length <= 0)
return false; return false;
if (n_dht >= S5P_JPEG_MAX_MARKER) if (n_dht >= S5P_JPEG_MAX_MARKER)
return false; return false;
@ -1214,6 +1218,7 @@ static bool s5p_jpeg_parse_hdr(struct s5p_jpeg_q_data *result,
if (get_word_be(&jpeg_buffer, &word)) if (get_word_be(&jpeg_buffer, &word))
break; break;
length = (long)word - 2; length = (long)word - 2;
/* No need to check underflows as skip() does it */
skip(&jpeg_buffer, length); skip(&jpeg_buffer, length);
break; break;
} }
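The repeated length <= 0 and word < 2 checks above guard the same thing: a JPEG marker length is a 16-bit field that includes its own two bytes, so a malformed 0 or 1 used to become a negative payload size that the old !length test never caught. A stand-alone sketch of the sanitized calculation:

/* Clamp a marker length to a non-negative payload size, as in the hunk above. */
#include <stdio.h>

static long segment_payload_len(unsigned int word)
{
	if (word < 2)                   /* includes the two length bytes itself */
		return 0;
	return (long)word - 2;
}

int main(void)
{
	printf("len(0x0000) = %ld\n", segment_payload_len(0x0000));  /* 0, not -2 */
	printf("len(0x0001) = %ld\n", segment_payload_len(0x0001));  /* 0, not -1 */
	printf("len(0x0012) = %ld\n", segment_payload_len(0x0012));  /* 16 */
	return 0;
}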
View File
@ -910,7 +910,7 @@ static int vivid_create_queue(struct vivid_dev *dev,
* videobuf2-core.c to MAX_BUFFER_INDEX. * videobuf2-core.c to MAX_BUFFER_INDEX.
*/ */
if (buf_type == V4L2_BUF_TYPE_VIDEO_CAPTURE) if (buf_type == V4L2_BUF_TYPE_VIDEO_CAPTURE)
q->max_num_buffers = 64; q->max_num_buffers = MAX_VID_CAP_BUFFERS;
if (buf_type == V4L2_BUF_TYPE_SDR_CAPTURE) if (buf_type == V4L2_BUF_TYPE_SDR_CAPTURE)
q->max_num_buffers = 1024; q->max_num_buffers = 1024;
if (buf_type == V4L2_BUF_TYPE_VBI_CAPTURE) if (buf_type == V4L2_BUF_TYPE_VBI_CAPTURE)
Some files were not shown because too many files have changed in this diff