Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts.

Adjacent changes:

drivers/net/ethernet/sfc/tc.c
  622ab65634 ("sfc: fix error unwinds in TC offload")
  b6583d5e9e ("sfc: support TC decap rules matching on enc_src_port")

net/mptcp/protocol.c
  5b825727d0 ("mptcp: add annotations around msk->subflow accesses")
  e76c8ef5cc ("mptcp: refactor mptcp_stream_accept()")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit a03a91bd68, Jakub Kicinski, 2023-06-01 15:33:03 -07:00
429 changed files with 2479 additions and 1380 deletions


@ -5,5 +5,5 @@ Changes
See https://wiki.samba.org/index.php/LinuxCIFSKernel for summary
information about fixes/improvements to CIFS/SMB2/SMB3 support (changes
to cifs.ko module) by kernel version (and cifs internal module version).
This may be easier to read than parsing the output of "git log fs/cifs"
by release.
This may be easier to read than parsing the output of
"git log fs/smb/client" by release.


@ -45,7 +45,7 @@ Installation instructions
If you have built the CIFS vfs as module (successfully) simply
type ``make modules_install`` (or if you prefer, manually copy the file to
the modules directory e.g. /lib/modules/2.4.10-4GB/kernel/fs/cifs/cifs.ko).
the modules directory e.g. /lib/modules/6.3.0-060300-generic/kernel/fs/smb/client/cifs.ko).
If you have built the CIFS vfs into the kernel itself, follow the instructions
for your distribution on how to install a new kernel (usually you
@ -66,15 +66,15 @@ If cifs is built as a module, then the size and number of network buffers
and maximum number of simultaneous requests to one server can be configured.
Changing these from their defaults is not recommended. By executing modinfo::
modinfo kernel/fs/cifs/cifs.ko
modinfo <path to cifs.ko>
on kernel/fs/cifs/cifs.ko the list of configuration changes that can be made
on kernel/fs/smb/client/cifs.ko the list of configuration changes that can be made
at module initialization time (by running insmod cifs.ko) can be seen.
Recommendations
===============
To improve security the SMB2.1 dialect or later (usually will get SMB3) is now
To improve security the SMB2.1 dialect or later (usually will get SMB3.1.1) is now
the new default. To use old dialects (e.g. to mount Windows XP) use "vers=1.0"
on mount (or vers=2.0 for Windows Vista). Note that the CIFS (vers=1.0) is
much older and less secure than the default dialect SMB3 which includes


@ -166,6 +166,12 @@ properties:
resets:
maxItems: 1
mediatek,broken-save-restore-fw:
type: boolean
description:
Asserts that the firmware on this device has issues saving and restoring
GICR registers when the GIC redistributors are powered off.
dependencies:
mbi-ranges: [ msi-controller ]
msi-controller: [ mbi-ranges ]


@ -64,7 +64,7 @@ properties:
description:
size of memory intended as internal memory for endpoints
buffers expressed in KB
$ref: /schemas/types.yaml#/definitions/uint32
$ref: /schemas/types.yaml#/definitions/uint16
cdns,phyrst-a-enable:
description: Enable resetting of PHY if Rx fail is detected


@ -72,7 +72,6 @@ Documentation for filesystem implementations.
befs
bfs
btrfs
cifs/index
ceph
coda
configfs
@ -111,6 +110,7 @@ Documentation for filesystem implementations.
ramfs-rootfs-initramfs
relay
romfs
smb/index
spufs/index
squashfs
sysfs


@ -59,7 +59,7 @@ the root file system via SMB protocol.
Enables the kernel to mount the root file system via SMB from the
<server-ip> and <share> specified in this option.
The default mount options are set in fs/cifs/cifsroot.c.
The default mount options are set in fs/smb/client/cifsroot.c.
server-ip
IPv4 address of the server.


@ -60,22 +60,6 @@ attribute-sets:
type: nest
nested-attributes: bitset-bits
-
name: u64-array
attributes:
-
name: u64
type: nest
multi-attr: true
nested-attributes: u64
-
name: s32-array
attributes:
-
name: s32
type: nest
multi-attr: true
nested-attributes: s32
-
name: string
attributes:
@ -705,16 +689,16 @@ attribute-sets:
type: u8
-
name: corrected
type: nest
nested-attributes: u64-array
type: binary
sub-type: u64
-
name: uncorr
type: nest
nested-attributes: u64-array
type: binary
sub-type: u64
-
name: corr-bits
type: nest
nested-attributes: u64-array
type: binary
sub-type: u64
-
name: fec
attributes:
@ -827,8 +811,8 @@ attribute-sets:
type: u32
-
name: index
type: nest
nested-attributes: s32-array
type: binary
sub-type: s32
-
name: module
attributes:


@ -40,6 +40,7 @@ flow_steering_mode: Device flow steering mode
---------------------------------------------
The flow steering mode parameter controls the flow steering mode of the driver.
Two modes are supported:
1. 'dmfs' - Device managed flow steering.
2. 'smfs' - Software/Driver managed flow steering.
@ -99,6 +100,7 @@ between representors and stacked devices.
By default metadata is enabled on the supported devices in E-switch.
Metadata is applicable only for E-switch in switchdev mode and
users may disable it when NONE of the below use cases will be in use:
1. HCA is in Dual/multi-port RoCE mode.
2. VF/SF representor bonding (Usually used for Live migration)
3. Stacked devices
@ -180,7 +182,8 @@ User commands examples:
$ devlink health diagnose pci/0000:82:00.0 reporter tx
NOTE: This command has valid output only when interface is up, otherwise the command has empty output.
.. note::
This command has valid output only when interface is up, otherwise the command has empty output.
- Show number of tx errors indicated, number of recover flows ended successfully,
is autorecover enabled and graceful period from last recover::
@ -232,8 +235,9 @@ User commands examples:
$ devlink health dump show pci/0000:82:00.0 reporter fw
NOTE: This command can run only on the PF which has fw tracer ownership,
running it on other PF or any VF will return "Operation not permitted".
.. note::
This command can run only on the PF which has fw tracer ownership,
running it on other PF or any VF will return "Operation not permitted".
fw fatal reporter
-----------------
@ -256,7 +260,8 @@ User commands examples:
$ devlink health dump show pci/0000:82:00.1 reporter fw_fatal
NOTE: This command can run only on PF.
.. note::
This command can run only on PF.
vnic reporter
-------------
@ -265,28 +270,37 @@ It is responsible for querying the vnic diagnostic counters from fw and displayi
them in realtime.
Description of the vnic counters:
total_q_under_processor_handle: number of queues in an error state due to
an async error or errored command.
send_queue_priority_update_flow: number of QP/SQ priority/SL update
events.
cq_overrun: number of times CQ entered an error state due to an
overflow.
async_eq_overrun: number of times an EQ mapped to async events was
overrun.
comp_eq_overrun: number of times an EQ mapped to completion events was
overrun.
quota_exceeded_command: number of commands issued and failed due to quota
exceeded.
invalid_command: number of commands issued and failed due to any reason
other than quota exceeded.
nic_receive_steering_discard: number of packets that completed RX flow
steering but were discarded due to a mismatch in flow table.
- total_q_under_processor_handle
number of queues in an error state due to
an async error or errored command.
- send_queue_priority_update_flow
number of QP/SQ priority/SL update events.
- cq_overrun
number of times CQ entered an error state due to an overflow.
- async_eq_overrun
number of times an EQ mapped to async events was overrun.
- comp_eq_overrun
number of times an EQ mapped to completion events was overrun.
- quota_exceeded_command
number of commands issued and failed due to quota exceeded.
- invalid_command
number of commands issued and failed due to any reason other than quota
exceeded.
- nic_receive_steering_discard
number of packets that completed RX flow
steering but were discarded due to a mismatch in flow table.
User commands examples:
- Diagnose PF/VF vnic counters
- Diagnose PF/VF vnic counters::
$ devlink health diagnose pci/0000:82:00.1 reporter vnic
- Diagnose representor vnic counters (performed by supplying devlink port of the
representor, which can be obtained via devlink port command)
representor, which can be obtained via devlink port command)::
$ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic
NOTE: This command can run over all interfaces such as PF/VF and representor ports.
.. note::
This command can run over all interfaces such as PF/VF and representor ports.


@ -35,7 +35,7 @@ Documentation written by Tom Zanussi
in place of an explicit value field - this is simply a count of
event hits. If 'values' isn't specified, an implicit 'hitcount'
value will be automatically created and used as the only value.
Keys can be any field, or the special string 'stacktrace', which
Keys can be any field, or the special string 'common_stacktrace', which
will use the event's kernel stacktrace as the key. The keywords
'keys' or 'key' can be used to specify keys, and the keywords
'values', 'vals', or 'val' can be used to specify values. Compound
@ -54,7 +54,7 @@ Documentation written by Tom Zanussi
'compatible' if the fields named in the trigger share the same
number and type of fields and those fields also have the same names.
Note that any two events always share the compatible 'hitcount' and
'stacktrace' fields and can therefore be combined using those
'common_stacktrace' fields and can therefore be combined using those
fields, however pointless that may be.
'hist' triggers add a 'hist' file to each event's subdirectory.
@ -547,9 +547,9 @@ Extended error information
the hist trigger display symbolic call_sites, we can have the hist
trigger additionally display the complete set of kernel stack traces
that led to each call_site. To do that, we simply use the special
value 'stacktrace' for the key parameter::
value 'common_stacktrace' for the key parameter::
# echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
# echo 'hist:keys=common_stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
/sys/kernel/tracing/events/kmem/kmalloc/trigger
The above trigger will use the kernel stack trace in effect when an
@ -561,9 +561,9 @@ Extended error information
every callpath to a kmalloc for a kernel compile)::
# cat /sys/kernel/tracing/events/kmem/kmalloc/hist
# trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
# trigger info: hist:keys=common_stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
{ stacktrace:
{ common_stacktrace:
__kmalloc_track_caller+0x10b/0x1a0
kmemdup+0x20/0x50
hidraw_report_event+0x8a/0x120 [hid]
@ -581,7 +581,7 @@ Extended error information
cpu_startup_entry+0x315/0x3e0
rest_init+0x7c/0x80
} hitcount: 3 bytes_req: 21 bytes_alloc: 24
{ stacktrace:
{ common_stacktrace:
__kmalloc_track_caller+0x10b/0x1a0
kmemdup+0x20/0x50
hidraw_report_event+0x8a/0x120 [hid]
@ -596,7 +596,7 @@ Extended error information
do_IRQ+0x5a/0xf0
ret_from_intr+0x0/0x30
} hitcount: 3 bytes_req: 21 bytes_alloc: 24
{ stacktrace:
{ common_stacktrace:
kmem_cache_alloc_trace+0xeb/0x150
aa_alloc_task_context+0x27/0x40
apparmor_cred_prepare+0x1f/0x50
@ -608,7 +608,7 @@ Extended error information
.
.
.
{ stacktrace:
{ common_stacktrace:
__kmalloc+0x11b/0x1b0
i915_gem_execbuffer2+0x6c/0x2c0 [i915]
drm_ioctl+0x349/0x670 [drm]
@ -616,7 +616,7 @@ Extended error information
SyS_ioctl+0x81/0xa0
system_call_fastpath+0x12/0x6a
} hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
{ stacktrace:
{ common_stacktrace:
__kmalloc+0x11b/0x1b0
load_elf_phdrs+0x76/0xa0
load_elf_binary+0x102/0x1650
@ -625,7 +625,7 @@ Extended error information
SyS_execve+0x3a/0x50
return_from_execve+0x0/0x23
} hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
{ stacktrace:
{ common_stacktrace:
kmem_cache_alloc_trace+0xeb/0x150
apparmor_file_alloc_security+0x27/0x40
security_file_alloc+0x16/0x20
@ -636,7 +636,7 @@ Extended error information
SyS_open+0x1e/0x20
system_call_fastpath+0x12/0x6a
} hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
{ stacktrace:
{ common_stacktrace:
__kmalloc+0x11b/0x1b0
seq_buf_alloc+0x1b/0x50
seq_read+0x2cc/0x370
@ -1026,7 +1026,7 @@ Extended error information
First we set up an initially paused stacktrace trigger on the
netif_receive_skb event::
# echo 'hist:key=stacktrace:vals=len:pause' > \
# echo 'hist:key=common_stacktrace:vals=len:pause' > \
/sys/kernel/tracing/events/net/netif_receive_skb/trigger
Next, we set up an 'enable_hist' trigger on the sched_process_exec
@ -1060,9 +1060,9 @@ Extended error information
$ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
# cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
# trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
# trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
{ stacktrace:
{ common_stacktrace:
__netif_receive_skb_core+0x46d/0x990
__netif_receive_skb+0x18/0x60
netif_receive_skb_internal+0x23/0x90
@ -1079,7 +1079,7 @@ Extended error information
kthread+0xd2/0xf0
ret_from_fork+0x42/0x70
} hitcount: 85 len: 28884
{ stacktrace:
{ common_stacktrace:
__netif_receive_skb_core+0x46d/0x990
__netif_receive_skb+0x18/0x60
netif_receive_skb_internal+0x23/0x90
@ -1097,7 +1097,7 @@ Extended error information
irq_thread+0x11f/0x150
kthread+0xd2/0xf0
} hitcount: 98 len: 664329
{ stacktrace:
{ common_stacktrace:
__netif_receive_skb_core+0x46d/0x990
__netif_receive_skb+0x18/0x60
process_backlog+0xa8/0x150
@ -1115,7 +1115,7 @@ Extended error information
inet_sendmsg+0x64/0xa0
sock_sendmsg+0x3d/0x50
} hitcount: 115 len: 13030
{ stacktrace:
{ common_stacktrace:
__netif_receive_skb_core+0x46d/0x990
__netif_receive_skb+0x18/0x60
netif_receive_skb_internal+0x23/0x90
@ -1142,14 +1142,14 @@ Extended error information
into the histogram. In order to avoid having to set everything up
again, we can just clear the histogram first::
# echo 'hist:key=stacktrace:vals=len:clear' >> \
# echo 'hist:key=common_stacktrace:vals=len:clear' >> \
/sys/kernel/tracing/events/net/netif_receive_skb/trigger
Just to verify that it is in fact cleared, here's what we now see in
the hist file::
# cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
# trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
# trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]
Totals:
Hits: 0
@ -1485,12 +1485,12 @@ Extended error information
And here's an example that shows how to combine histogram data from
any two events even if they don't share any 'compatible' fields
other than 'hitcount' and 'stacktrace'. These commands create a
other than 'hitcount' and 'common_stacktrace'. These commands create a
couple of triggers named 'bar' using those fields::
# echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
# echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
/sys/kernel/tracing/events/sched/sched_process_fork/trigger
# echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
# echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
/sys/kernel/tracing/events/net/netif_rx/trigger
And displaying the output of either shows some interesting if
@ -1501,16 +1501,16 @@ Extended error information
# event histogram
#
# trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
# trigger info: hist:name=bar:keys=common_stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
#
{ stacktrace:
{ common_stacktrace:
kernel_clone+0x18e/0x330
kernel_thread+0x29/0x30
kthreadd+0x154/0x1b0
ret_from_fork+0x3f/0x70
} hitcount: 1
{ stacktrace:
{ common_stacktrace:
netif_rx_internal+0xb2/0xd0
netif_rx_ni+0x20/0x70
dev_loopback_xmit+0xaa/0xd0
@ -1528,7 +1528,7 @@ Extended error information
call_cpuidle+0x3b/0x60
cpu_startup_entry+0x22d/0x310
} hitcount: 1
{ stacktrace:
{ common_stacktrace:
netif_rx_internal+0xb2/0xd0
netif_rx_ni+0x20/0x70
dev_loopback_xmit+0xaa/0xd0
@ -1543,7 +1543,7 @@ Extended error information
SyS_sendto+0xe/0x10
entry_SYSCALL_64_fastpath+0x12/0x6a
} hitcount: 2
{ stacktrace:
{ common_stacktrace:
netif_rx_internal+0xb2/0xd0
netif_rx+0x1c/0x60
loopback_xmit+0x6c/0xb0
@ -1561,7 +1561,7 @@ Extended error information
sock_sendmsg+0x38/0x50
___sys_sendmsg+0x14e/0x270
} hitcount: 76
{ stacktrace:
{ common_stacktrace:
netif_rx_internal+0xb2/0xd0
netif_rx+0x1c/0x60
loopback_xmit+0x6c/0xb0
@ -1579,7 +1579,7 @@ Extended error information
sock_sendmsg+0x38/0x50
___sys_sendmsg+0x269/0x270
} hitcount: 77
{ stacktrace:
{ common_stacktrace:
netif_rx_internal+0xb2/0xd0
netif_rx+0x1c/0x60
loopback_xmit+0x6c/0xb0
@ -1597,7 +1597,7 @@ Extended error information
sock_sendmsg+0x38/0x50
SYSC_sendto+0xef/0x170
} hitcount: 88
{ stacktrace:
{ common_stacktrace:
kernel_clone+0x18e/0x330
SyS_clone+0x19/0x20
entry_SYSCALL_64_fastpath+0x12/0x6a
@ -1949,7 +1949,7 @@ uninterruptible state::
# cd /sys/kernel/tracing
# echo 's:block_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events
# echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
# echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
# echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(block_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger
# echo 1 > events/synthetic/block_lat/enable
# cat trace


@ -363,7 +363,7 @@ Code Seq# Include File Comments
0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver
0xCD 01 linux/reiserfs_fs.h
0xCE 01-02 uapi/linux/cxl_mem.h Compute Express Link Memory Devices
0xCF 02 fs/cifs/ioctl.c
0xCF 02 fs/smb/client/cifs_ioctl.h
0xDB 00-0F drivers/char/mwave/mwavepub.h
0xDD 00-3F ZFCP device driver see drivers/s390/scsi/
<mailto:aherrman@de.ibm.com>


@ -956,7 +956,8 @@ F: Documentation/networking/device_drivers/ethernet/amazon/ena.rst
F: drivers/net/ethernet/amazon/
AMAZON RDMA EFA DRIVER
M: Gal Pressman <galpress@amazon.com>
M: Michael Margolin <mrgolin@amazon.com>
R: Gal Pressman <gal.pressman@linux.dev>
R: Yossi Leybovich <sleybo@amazon.com>
L: linux-rdma@vger.kernel.org
S: Supported
@ -2429,6 +2430,15 @@ X: drivers/net/wireless/atmel/
N: at91
N: atmel
ARM/MICROCHIP (ARM64) SoC support
M: Conor Dooley <conor@kernel.org>
M: Nicolas Ferre <nicolas.ferre@microchip.com>
M: Claudiu Beznea <claudiu.beznea@microchip.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
T: git https://git.kernel.org/pub/scm/linux/kernel/git/at91/linux.git
F: arch/arm64/boot/dts/microchip/
ARM/Microchip Sparx5 SoC support
M: Lars Povlsen <lars.povlsen@microchip.com>
M: Steen Hegelund <Steen.Hegelund@microchip.com>
@ -2436,8 +2446,7 @@ M: Daniel Machon <daniel.machon@microchip.com>
M: UNGLinuxDriver@microchip.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
T: git git://github.com/microchip-ung/linux-upstream.git
F: arch/arm64/boot/dts/microchip/
F: arch/arm64/boot/dts/microchip/sparx*
F: drivers/net/ethernet/microchip/vcap/
F: drivers/pinctrl/pinctrl-microchip-sgpio.c
N: sparx5
@ -3536,7 +3545,7 @@ F: Documentation/filesystems/befs.rst
F: fs/befs/
BFQ I/O SCHEDULER
M: Paolo Valente <paolo.valente@linaro.org>
M: Paolo Valente <paolo.valente@unimore.it>
M: Jens Axboe <axboe@kernel.dk>
L: linux-block@vger.kernel.org
S: Maintained
@ -5130,7 +5139,7 @@ X: drivers/clk/clkdev.c
COMMON INTERNET FILE SYSTEM CLIENT (CIFS and SMB3)
M: Steve French <sfrench@samba.org>
R: Paulo Alcantara <pc@cjr.nz> (DFS, global name space)
R: Paulo Alcantara <pc@manguebit.com> (DFS, global name space)
R: Ronnie Sahlberg <lsahlber@redhat.com> (directory leases, sparse files)
R: Shyam Prasad N <sprasad@microsoft.com> (multichannel)
R: Tom Talpey <tom@talpey.com> (RDMA, smbdirect)
@ -5140,8 +5149,8 @@ S: Supported
W: https://wiki.samba.org/index.php/LinuxCIFS
T: git git://git.samba.org/sfrench/cifs-2.6.git
F: Documentation/admin-guide/cifs/
F: fs/cifs/
F: fs/smbfs_common/
F: fs/smb/client/
F: fs/smb/common/
F: include/uapi/linux/cifs
COMPACTPCI HOTPLUG CORE
@ -9341,7 +9350,7 @@ F: include/linux/hisi_acc_qm.h
HISILICON ROCE DRIVER
M: Haoyue Xu <xuhaoyue1@hisilicon.com>
M: Wenpeng Liang <liangwenpeng@huawei.com>
M: Junxian Huang <huangjunxian6@hisilicon.com>
L: linux-rdma@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/infiniband/hisilicon-hns-roce.txt
@ -11306,9 +11315,9 @@ R: Tom Talpey <tom@talpey.com>
L: linux-cifs@vger.kernel.org
S: Maintained
T: git git://git.samba.org/ksmbd.git
F: Documentation/filesystems/cifs/ksmbd.rst
F: fs/ksmbd/
F: fs/smbfs_common/
F: Documentation/filesystems/smb/ksmbd.rst
F: fs/smb/common/
F: fs/smb/server/
KERNEL UNIT TESTING FRAMEWORK (KUnit)
M: Brendan Higgins <brendanhiggins@google.com>
@ -14932,6 +14941,7 @@ F: drivers/ntb/hw/intel/
NTFS FILESYSTEM
M: Anton Altaparmakov <anton@tuxera.com>
R: Namjae Jeon <linkinjeon@kernel.org>
L: linux-ntfs-dev@lists.sourceforge.net
S: Supported
W: http://www.tuxera.com/


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 4
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*


@ -209,6 +209,7 @@ &pcie {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcie>;
reset-gpio = <&gpio6 7 GPIO_ACTIVE_LOW>;
vpcie-supply = <&reg_pcie>;
status = "okay";
};


@ -8,6 +8,7 @@
#include <dt-bindings/input/input.h>
#include <dt-bindings/leds/common.h>
#include <dt-bindings/pwm/pwm.h>
#include <dt-bindings/regulator/dlg,da9063-regulator.h>
#include "imx6ull.dtsi"
/ {
@ -84,16 +85,20 @@ onkey {
regulators {
vdd_soc_in_1v4: buck1 {
regulator-allowed-modes = <DA9063_BUCK_MODE_SLEEP>; /* PFM */
regulator-always-on;
regulator-boot-on;
regulator-initial-mode = <DA9063_BUCK_MODE_SLEEP>;
regulator-max-microvolt = <1400000>;
regulator-min-microvolt = <1400000>;
regulator-name = "vdd_soc_in_1v4";
};
vcc_3v3: buck2 {
regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
regulator-always-on;
regulator-boot-on;
regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
regulator-max-microvolt = <3300000>;
regulator-min-microvolt = <3300000>;
regulator-name = "vcc_3v3";
@ -106,8 +111,10 @@ vcc_3v3: buck2 {
* the voltage is set to 1.5V.
*/
vcc_ddr_1v35: buck3 {
regulator-allowed-modes = <DA9063_BUCK_MODE_SYNC>; /* PWM */
regulator-always-on;
regulator-boot-on;
regulator-initial-mode = <DA9063_BUCK_MODE_SYNC>;
regulator-max-microvolt = <1500000>;
regulator-min-microvolt = <1500000>;
regulator-name = "vcc_ddr_1v35";


@ -132,6 +132,7 @@ L2: cache-controller@2c0f0000 {
reg = <0x2c0f0000 0x1000>;
interrupts = <0 84 4>;
cache-level = <2>;
cache-unified;
};
pmu {


@ -59,6 +59,7 @@ cpu3: cpu@3 {
L2_0: l2-cache0 {
compatible = "cache";
cache-level = <2>;
cache-unified;
};
};


@ -72,6 +72,7 @@ cpu@3 {
L2_0: l2-cache0 {
compatible = "cache";
cache-level = <2>;
cache-unified;
};
};


@ -58,6 +58,7 @@ cpu@1 {
L2_0: l2-cache0 {
compatible = "cache";
cache-level = <2>;
cache-unified;
};
};


@ -171,6 +171,7 @@ usbotg3_cdns3: usb@5b120000 {
interrupt-names = "host", "peripheral", "otg", "wakeup";
phys = <&usb3_phy>;
phy-names = "cdns3,usb3-phy";
cdns,on-chip-buff-size = /bits/ 16 <18>;
status = "disabled";
};
};


@ -98,11 +98,17 @@ mdio {
#address-cells = <1>;
#size-cells = <0>;
ethphy: ethernet-phy@4 {
ethphy: ethernet-phy@4 { /* AR8033 or ADIN1300 */
compatible = "ethernet-phy-ieee802.3-c22";
reg = <4>;
reset-gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
reset-assert-us = <10000>;
/*
* Deassert delay:
* ADIN1300 requires 5ms.
* AR8033 requires 1ms.
*/
reset-deassert-us = <20000>;
};
};
};


@ -1069,13 +1069,6 @@ lcdif: lcdif@32e00000 {
<&clk IMX8MN_CLK_DISP_APB_ROOT>,
<&clk IMX8MN_CLK_DISP_AXI_ROOT>;
clock-names = "pix", "axi", "disp_axi";
assigned-clocks = <&clk IMX8MN_CLK_DISP_PIXEL_ROOT>,
<&clk IMX8MN_CLK_DISP_AXI>,
<&clk IMX8MN_CLK_DISP_APB>;
assigned-clock-parents = <&clk IMX8MN_CLK_DISP_PIXEL>,
<&clk IMX8MN_SYS_PLL2_1000M>,
<&clk IMX8MN_SYS_PLL1_800M>;
assigned-clock-rates = <594000000>, <500000000>, <200000000>;
interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&disp_blk_ctrl IMX8MN_DISPBLK_PD_LCDIF>;
status = "disabled";
@ -1093,12 +1086,6 @@ mipi_dsi: dsi@32e10000 {
clocks = <&clk IMX8MN_CLK_DSI_CORE>,
<&clk IMX8MN_CLK_DSI_PHY_REF>;
clock-names = "bus_clk", "sclk_mipi";
assigned-clocks = <&clk IMX8MN_CLK_DSI_CORE>,
<&clk IMX8MN_CLK_DSI_PHY_REF>;
assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
<&clk IMX8MN_CLK_24M>;
assigned-clock-rates = <266000000>, <24000000>;
samsung,pll-clock-frequency = <24000000>;
interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&disp_blk_ctrl IMX8MN_DISPBLK_PD_MIPI_DSI>;
status = "disabled";
@ -1142,6 +1129,21 @@ disp_blk_ctrl: blk-ctrl@32e28000 {
"lcdif-axi", "lcdif-apb", "lcdif-pix",
"dsi-pclk", "dsi-ref",
"csi-aclk", "csi-pclk";
assigned-clocks = <&clk IMX8MN_CLK_DSI_CORE>,
<&clk IMX8MN_CLK_DSI_PHY_REF>,
<&clk IMX8MN_CLK_DISP_PIXEL>,
<&clk IMX8MN_CLK_DISP_AXI>,
<&clk IMX8MN_CLK_DISP_APB>;
assigned-clock-parents = <&clk IMX8MN_SYS_PLL1_266M>,
<&clk IMX8MN_CLK_24M>,
<&clk IMX8MN_VIDEO_PLL1_OUT>,
<&clk IMX8MN_SYS_PLL2_1000M>,
<&clk IMX8MN_SYS_PLL1_800M>;
assigned-clock-rates = <266000000>,
<24000000>,
<594000000>,
<500000000>,
<200000000>;
#power-domain-cells = <1>;
};


@ -1211,13 +1211,6 @@ lcdif1: display-controller@32e80000 {
<&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
<&clk IMX8MP_CLK_MEDIA_AXI_ROOT>;
clock-names = "pix", "axi", "disp_axi";
assigned-clocks = <&clk IMX8MP_CLK_MEDIA_DISP1_PIX_ROOT>,
<&clk IMX8MP_CLK_MEDIA_AXI>,
<&clk IMX8MP_CLK_MEDIA_APB>;
assigned-clock-parents = <&clk IMX8MP_CLK_MEDIA_DISP1_PIX>,
<&clk IMX8MP_SYS_PLL2_1000M>,
<&clk IMX8MP_SYS_PLL1_800M>;
assigned-clock-rates = <594000000>, <500000000>, <200000000>;
interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
power-domains = <&media_blk_ctrl IMX8MP_MEDIABLK_PD_LCDIF_1>;
status = "disabled";
@ -1237,11 +1230,6 @@ lcdif2: display-controller@32e90000 {
<&clk IMX8MP_CLK_MEDIA_APB_ROOT>,
<&clk IMX8MP_CLK_MEDIA_AXI_ROOT>;
clock-names = "pix", "axi", "disp_axi";
assigned-clocks = <&clk IMX8MP_CLK_MEDIA_DISP2_PIX>,
<&clk IMX8MP_VIDEO_PLL1>;
assigned-clock-parents = <&clk IMX8MP_VIDEO_PLL1_OUT>,
<&clk IMX8MP_VIDEO_PLL1_REF_SEL>;
assigned-clock-rates = <0>, <1039500000>;
power-domains = <&media_blk_ctrl IMX8MP_MEDIABLK_PD_LCDIF_2>;
status = "disabled";
@ -1296,11 +1284,16 @@ media_blk_ctrl: blk-ctrl@32ec0000 {
"disp1", "disp2", "isp", "phy";
assigned-clocks = <&clk IMX8MP_CLK_MEDIA_AXI>,
<&clk IMX8MP_CLK_MEDIA_APB>;
<&clk IMX8MP_CLK_MEDIA_APB>,
<&clk IMX8MP_CLK_MEDIA_DISP1_PIX>,
<&clk IMX8MP_CLK_MEDIA_DISP2_PIX>,
<&clk IMX8MP_VIDEO_PLL1>;
assigned-clock-parents = <&clk IMX8MP_SYS_PLL2_1000M>,
<&clk IMX8MP_SYS_PLL1_800M>;
assigned-clock-rates = <500000000>, <200000000>;
<&clk IMX8MP_SYS_PLL1_800M>,
<&clk IMX8MP_VIDEO_PLL1_OUT>,
<&clk IMX8MP_VIDEO_PLL1_OUT>;
assigned-clock-rates = <500000000>, <200000000>,
<0>, <0>, <1039500000>;
#power-domain-cells = <1>;
lvds_bridge: bridge@5c {


@ -33,6 +33,12 @@ rtc_i2c: rtc@68 {
};
};
&iomuxc {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_ext_io0>, <&pinctrl_hog0>, <&pinctrl_hog1>,
<&pinctrl_lpspi2_cs2>;
};
/* Colibri SPI */
&lpspi2 {
status = "okay";


@ -48,8 +48,7 @@ pinctrl_gpio_iris: gpioirisgrp {
<IMX8QXP_SAI0_TXFS_LSIO_GPIO0_IO28 0x20>, /* SODIMM 101 */
<IMX8QXP_SAI0_RXD_LSIO_GPIO0_IO27 0x20>, /* SODIMM 97 */
<IMX8QXP_ENET0_RGMII_RXC_LSIO_GPIO5_IO03 0x06000020>, /* SODIMM 85 */
<IMX8QXP_SAI0_TXC_LSIO_GPIO0_IO26 0x20>, /* SODIMM 79 */
<IMX8QXP_QSPI0A_DATA1_LSIO_GPIO3_IO10 0x06700041>; /* SODIMM 45 */
<IMX8QXP_SAI0_TXC_LSIO_GPIO0_IO26 0x20>; /* SODIMM 79 */
};
pinctrl_uart1_forceoff: uart1forceoffgrp {


@ -363,10 +363,6 @@ &usdhc2 {
/* TODO VPU Encoder/Decoder */
&iomuxc {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_ext_io0>, <&pinctrl_hog0>, <&pinctrl_hog1>,
<&pinctrl_hog2>, <&pinctrl_lpspi2_cs2>;
/* On-module touch pen-down interrupt */
pinctrl_ad7879_int: ad7879intgrp {
fsl,pins = <IMX8QXP_MIPI_CSI0_I2C0_SCL_LSIO_GPIO3_IO05 0x21>;
@ -499,8 +495,7 @@ pinctrl_hog0: hog0grp {
};
pinctrl_hog1: hog1grp {
fsl,pins = <IMX8QXP_CSI_MCLK_LSIO_GPIO3_IO01 0x20>, /* SODIMM 75 */
<IMX8QXP_QSPI0A_SCLK_LSIO_GPIO3_IO16 0x20>; /* SODIMM 93 */
fsl,pins = <IMX8QXP_QSPI0A_SCLK_LSIO_GPIO3_IO16 0x20>; /* SODIMM 93 */
};
pinctrl_hog2: hog2grp {
@ -774,3 +769,10 @@ pinctrl_wifi: wifigrp {
fsl,pins = <IMX8QXP_SCU_BOOT_MODE3_SCU_DSC_RTC_CLOCK_OUTPUT_32K 0x20>;
};
};
/* Delete peripherals which are not present on SOC, but are defined in imx8-ss-*.dtsi */
/delete-node/ &adc1;
/delete-node/ &adc1_lpcg;
/delete-node/ &dsp;
/delete-node/ &dsp_lpcg;


@ -79,6 +79,7 @@ config MIPS
select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_PATA_PLATFORM
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP


@ -30,6 +30,7 @@
*
*/
#include <linux/dma-map-ops.h> /* for dma_default_coherent */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>
@ -623,17 +624,18 @@ u32 au1xxx_dbdma_put_source(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
dp->dscr_cmd0 &= ~DSCR_CMD0_IE;
/*
* There is an errata on the Au1200/Au1550 parts that could result
* in "stale" data being DMA'ed. It has to do with the snoop logic on
* the cache eviction buffer. DMA_NONCOHERENT is on by default for
* these parts. If it is fixed in the future, these dma_cache_inv will
* just be nothing more than empty macros. See io.h.
* There is an erratum on certain Au1200/Au1550 revisions that could
* result in "stale" data being DMA'ed. It has to do with the snoop
* logic on the cache eviction buffer. dma_default_coherent is set
* to false on these parts.
*/
dma_cache_wback_inv((unsigned long)buf, nbytes);
if (!dma_default_coherent)
dma_cache_wback_inv(KSEG0ADDR(buf), nbytes);
dp->dscr_cmd0 |= DSCR_CMD0_V; /* Let it rip */
wmb(); /* drain writebuffer */
dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
ctp->chan_ptr->ddma_dbell = 0;
wmb(); /* force doorbell write out to dma engine */
/* Get next descriptor pointer. */
ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));
@ -685,17 +687,18 @@ u32 au1xxx_dbdma_put_dest(u32 chanid, dma_addr_t buf, int nbytes, u32 flags)
dp->dscr_source1, dp->dscr_dest0, dp->dscr_dest1);
#endif
/*
* There is an errata on the Au1200/Au1550 parts that could result in
* "stale" data being DMA'ed. It has to do with the snoop logic on the
* cache eviction buffer. DMA_NONCOHERENT is on by default for these
* parts. If it is fixed in the future, these dma_cache_inv will just
* be nothing more than empty macros. See io.h.
* There is an erratum on certain Au1200/Au1550 revisions that could
* result in "stale" data being DMA'ed. It has to do with the snoop
* logic on the cache eviction buffer. dma_default_coherent is set
* to false on these parts.
*/
dma_cache_inv((unsigned long)buf, nbytes);
if (!dma_default_coherent)
dma_cache_inv(KSEG0ADDR(buf), nbytes);
dp->dscr_cmd0 |= DSCR_CMD0_V; /* Let it rip */
wmb(); /* drain writebuffer */
dma_cache_wback_inv((unsigned long)dp, sizeof(*dp));
ctp->chan_ptr->ddma_dbell = 0;
wmb(); /* force doorbell write out to dma engine */
/* Get next descriptor pointer. */
ctp->put_ptr = phys_to_virt(DSCR_GET_NXTPTR(dp->dscr_nxtptr));


@ -1502,6 +1502,10 @@ static inline void cpu_probe_alchemy(struct cpuinfo_mips *c, unsigned int cpu)
break;
}
break;
case PRID_IMP_NETLOGIC_AU13XX:
c->cputype = CPU_ALCHEMY;
__cpu_name[cpu] = "Au1300";
break;
}
}
@ -1863,6 +1867,7 @@ void cpu_probe(void)
cpu_probe_mips(c, cpu);
break;
case PRID_COMP_ALCHEMY:
case PRID_COMP_NETLOGIC:
cpu_probe_alchemy(c, cpu);
break;
case PRID_COMP_SIBYTE:


@ -158,10 +158,6 @@ static unsigned long __init init_initrd(void)
pr_err("initrd start must be page aligned\n");
goto disable;
}
if (initrd_start < PAGE_OFFSET) {
pr_err("initrd start < PAGE_OFFSET\n");
goto disable;
}
/*
* Sanitize initrd addresses. For example firmware
@ -174,6 +170,11 @@ static unsigned long __init init_initrd(void)
initrd_end = (unsigned long)__va(end);
initrd_start = (unsigned long)__va(__pa(initrd_start));
if (initrd_start < PAGE_OFFSET) {
pr_err("initrd start < PAGE_OFFSET\n");
goto disable;
}
ROOT_DEV = Root_RAM0;
return PFN_UP(end);
disable:


@ -130,6 +130,10 @@ config PM
config STACKTRACE_SUPPORT
def_bool y
config LOCKDEP_SUPPORT
bool
default y
config ISA_DMA_API
bool


@ -1 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
#
config LIGHTWEIGHT_SPINLOCK_CHECK
bool "Enable lightweight spinlock checks"
depends on SMP && !DEBUG_SPINLOCK
default y
help
Add checks with low performance impact to the spinlock functions
to catch memory overwrites at runtime. For more advanced
spinlock debugging you should choose the DEBUG_SPINLOCK option
which will detect uninitialized spinlocks too.
If unsure, say Y here.


@ -48,6 +48,10 @@ void flush_dcache_page(struct page *page);
#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&mapping->i_pages)
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&mapping->i_pages)
#define flush_dcache_mmap_lock_irqsave(mapping, flags) \
xa_lock_irqsave(&mapping->i_pages, flags)
#define flush_dcache_mmap_unlock_irqrestore(mapping, flags) \
xa_unlock_irqrestore(&mapping->i_pages, flags)
#define flush_icache_page(vma,page) do { \
flush_kernel_dcache_page_addr(page_address(page)); \


@ -7,10 +7,26 @@
#include <asm/processor.h>
#include <asm/spinlock_types.h>
#define SPINLOCK_BREAK_INSN 0x0000c006 /* break 6,6 */
static inline void arch_spin_val_check(int lock_val)
{
if (IS_ENABLED(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK))
asm volatile( "andcm,= %0,%1,%%r0\n"
".word %2\n"
: : "r" (lock_val), "r" (__ARCH_SPIN_LOCK_UNLOCKED_VAL),
"i" (SPINLOCK_BREAK_INSN));
}
static inline int arch_spin_is_locked(arch_spinlock_t *x)
{
volatile unsigned int *a = __ldcw_align(x);
return READ_ONCE(*a) == 0;
volatile unsigned int *a;
int lock_val;
a = __ldcw_align(x);
lock_val = READ_ONCE(*a);
arch_spin_val_check(lock_val);
return (lock_val == 0);
}
static inline void arch_spin_lock(arch_spinlock_t *x)
@ -18,9 +34,18 @@ static inline void arch_spin_lock(arch_spinlock_t *x)
volatile unsigned int *a;
a = __ldcw_align(x);
while (__ldcw(a) == 0)
do {
int lock_val_old;
lock_val_old = __ldcw(a);
arch_spin_val_check(lock_val_old);
if (lock_val_old)
return; /* got lock */
/* wait until we should try to get lock again */
while (*a == 0)
continue;
} while (1);
}
static inline void arch_spin_unlock(arch_spinlock_t *x)
@ -29,15 +54,19 @@ static inline void arch_spin_unlock(arch_spinlock_t *x)
a = __ldcw_align(x);
/* Release with ordered store. */
__asm__ __volatile__("stw,ma %0,0(%1)" : : "r"(1), "r"(a) : "memory");
__asm__ __volatile__("stw,ma %0,0(%1)"
: : "r"(__ARCH_SPIN_LOCK_UNLOCKED_VAL), "r"(a) : "memory");
}
static inline int arch_spin_trylock(arch_spinlock_t *x)
{
volatile unsigned int *a;
int lock_val;
a = __ldcw_align(x);
return __ldcw(a) != 0;
lock_val = __ldcw(a);
arch_spin_val_check(lock_val);
return lock_val != 0;
}
/*


@ -2,13 +2,17 @@
#ifndef __ASM_SPINLOCK_TYPES_H
#define __ASM_SPINLOCK_TYPES_H
#define __ARCH_SPIN_LOCK_UNLOCKED_VAL 0x1a46
typedef struct {
#ifdef CONFIG_PA20
volatile unsigned int slock;
# define __ARCH_SPIN_LOCK_UNLOCKED { 1 }
# define __ARCH_SPIN_LOCK_UNLOCKED { __ARCH_SPIN_LOCK_UNLOCKED_VAL }
#else
volatile unsigned int lock[4];
# define __ARCH_SPIN_LOCK_UNLOCKED { { 1, 1, 1, 1 } }
# define __ARCH_SPIN_LOCK_UNLOCKED \
{ { __ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL, \
__ARCH_SPIN_LOCK_UNLOCKED_VAL, __ARCH_SPIN_LOCK_UNLOCKED_VAL } }
#endif
} arch_spinlock_t;
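
Taken together, the spinlock hunks above give the unlocked state a
recognizable magic value (0x1a46) instead of 1, so a stray write that
tramples a lock word is caught the next time the word is examined. A rough
user-space sketch of the same idea, using C11 atomics in place of
PA-RISC's ldcw (load-and-clear-word); the names here are illustrative,
not the kernel's::

    #include <assert.h>
    #include <stdatomic.h>

    #define UNLOCKED_MAGIC 0x1a46  /* plays the role of __ARCH_SPIN_LOCK_UNLOCKED_VAL */

    struct checked_spinlock { atomic_uint word; };

    /* A healthy lock word is either 0 (held) or the magic value (free);
     * anything else means memory around the lock was overwritten. */
    static void check_lock_val(unsigned int v)
    {
        assert(v == 0 || v == UNLOCKED_MAGIC);
    }

    static void checked_lock(struct checked_spinlock *l)
    {
        for (;;) {
            /* ldcw-style acquire: atomically read the word and zero it */
            unsigned int old = atomic_exchange(&l->word, 0);

            check_lock_val(old);
            if (old)
                return;  /* saw the magic value: lock acquired */

            /* spin until the holder stores the magic value back */
            while (atomic_load_explicit(&l->word, memory_order_relaxed) == 0)
                ;
        }
    }

    static void checked_unlock(struct checked_spinlock *l)
    {
        atomic_store_explicit(&l->word, UNLOCKED_MAGIC, memory_order_release);
    }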


@ -25,7 +25,7 @@ void __init_or_module apply_alternatives(struct alt_instr *start,
{
struct alt_instr *entry;
int index = 0, applied = 0;
int num_cpus = num_online_cpus();
int num_cpus = num_present_cpus();
u16 cond_check;
cond_check = ALT_COND_ALWAYS |


@ -399,6 +399,7 @@ void flush_dcache_page(struct page *page)
unsigned long offset;
unsigned long addr, old_addr = 0;
unsigned long count = 0;
unsigned long flags;
pgoff_t pgoff;
if (mapping && !mapping_mapped(mapping)) {
@ -420,7 +421,7 @@ void flush_dcache_page(struct page *page)
* to flush one address here for them all to become coherent
* on machines that support equivalent aliasing
*/
flush_dcache_mmap_lock(mapping);
flush_dcache_mmap_lock_irqsave(mapping, flags);
vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
addr = mpnt->vm_start + offset;
@ -460,7 +461,7 @@ void flush_dcache_page(struct page *page)
}
WARN_ON(++count == 4096);
}
flush_dcache_mmap_unlock(mapping);
flush_dcache_mmap_unlock_irqrestore(mapping, flags);
}
EXPORT_SYMBOL(flush_dcache_page);


@ -446,11 +446,27 @@ void arch_dma_free(struct device *dev, size_t size, void *vaddr,
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
/*
* fdc: The data cache line is written back to memory, if and only if
* it is dirty, and then invalidated from the data cache.
*/
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
}
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
enum dma_data_direction dir)
{
flush_kernel_dcache_range((unsigned long)phys_to_virt(paddr), size);
unsigned long addr = (unsigned long) phys_to_virt(paddr);
switch (dir) {
case DMA_TO_DEVICE:
case DMA_BIDIRECTIONAL:
flush_kernel_dcache_range(addr, size);
return;
case DMA_FROM_DEVICE:
purge_kernel_dcache_range_asm(addr, addr + size);
return;
default:
BUG();
}
}


@ -122,13 +122,18 @@ void machine_power_off(void)
/* It seems we have no way to power the system off via
* software. The user has to press the button himself. */
printk(KERN_EMERG "System shut down completed.\n"
"Please power this system off now.");
printk("Power off or press RETURN to reboot.\n");
/* prevent soft lockup/stalled CPU messages for endless loop. */
rcu_sysrq_start();
lockup_detector_soft_poweroff();
for (;;);
while (1) {
/* reboot if user presses RETURN key */
if (pdc_iodc_getc() == 13) {
printk("Rebooting...\n");
machine_restart(NULL);
}
}
}
void (*pm_power_off)(void);


@ -47,6 +47,10 @@
#include <linux/kgdb.h>
#include <linux/kprobes.h>
#if defined(CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK)
#include <asm/spinlock.h>
#endif
#include "../math-emu/math-emu.h" /* for handle_fpe() */
static void parisc_show_stack(struct task_struct *task,
@ -291,24 +295,30 @@ static void handle_break(struct pt_regs *regs)
}
#ifdef CONFIG_KPROBES
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN)) {
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN && !user_mode(regs))) {
parisc_kprobe_break_handler(regs);
return;
}
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2)) {
if (unlikely(iir == PARISC_KPROBES_BREAK_INSN2 && !user_mode(regs))) {
parisc_kprobe_ss_handler(regs);
return;
}
#endif
#ifdef CONFIG_KGDB
if (unlikely(iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
iir == PARISC_KGDB_BREAK_INSN)) {
if (unlikely((iir == PARISC_KGDB_COMPILED_BREAK_INSN ||
iir == PARISC_KGDB_BREAK_INSN)) && !user_mode(regs)) {
kgdb_handle_exception(9, SIGTRAP, 0, regs);
return;
}
#endif
#ifdef CONFIG_LIGHTWEIGHT_SPINLOCK_CHECK
if ((iir == SPINLOCK_BREAK_INSN) && !user_mode(regs)) {
die_if_kernel("Spinlock was trashed", regs, 1);
}
#endif
if (unlikely(iir != GDB_BREAK_INSN))
parisc_printk_ratelimited(0, regs,
KERN_DEBUG "break %d,%d: pid=%d command='%s'\n",


@ -906,11 +906,17 @@ config DATA_SHIFT
config ARCH_FORCE_MAX_ORDER
int "Order of maximal physically contiguous allocations"
range 7 8 if PPC64 && PPC_64K_PAGES
default "8" if PPC64 && PPC_64K_PAGES
range 12 12 if PPC64 && !PPC_64K_PAGES
default "12" if PPC64 && !PPC_64K_PAGES
range 8 10 if PPC32 && PPC_16K_PAGES
default "8" if PPC32 && PPC_16K_PAGES
range 6 10 if PPC32 && PPC_64K_PAGES
default "6" if PPC32 && PPC_64K_PAGES
range 4 10 if PPC32 && PPC_256K_PAGES
default "4" if PPC32 && PPC_256K_PAGES
range 10 10
default "10"
help
The kernel page allocator limits the size of maximal physically


@ -773,8 +773,6 @@
.octa 0x3F893781E95FE1576CDA64D2BA0CB204
#ifdef CONFIG_AS_GFNI
.section .rodata.cst8, "aM", @progbits, 8
.align 8
/* AES affine: */
#define tf_aff_const BV8(1, 1, 0, 0, 0, 1, 1, 0)
.Ltf_aff_bitmatrix:


@ -4074,7 +4074,7 @@ static struct perf_guest_switch_msr *intel_guest_get_msrs(int *nr, void *data)
if (x86_pmu.intel_cap.pebs_baseline) {
arr[(*nr)++] = (struct perf_guest_switch_msr){
.msr = MSR_PEBS_DATA_CFG,
.host = cpuc->pebs_data_cfg,
.host = cpuc->active_pebs_data_cfg,
.guest = kvm_pmu->pebs_data_cfg,
};
}

View File

@ -6150,6 +6150,7 @@ static struct intel_uncore_type spr_uncore_mdf = {
};
#define UNCORE_SPR_NUM_UNCORE_TYPES 12
#define UNCORE_SPR_CHA 0
#define UNCORE_SPR_IIO 1
#define UNCORE_SPR_IMC 6
#define UNCORE_SPR_UPI 8
@ -6460,12 +6461,22 @@ static int uncore_type_max_boxes(struct intel_uncore_type **types,
return max + 1;
}
#define SPR_MSR_UNC_CBO_CONFIG 0x2FFE
void spr_uncore_cpu_init(void)
{
struct intel_uncore_type *type;
u64 num_cbo;
uncore_msr_uncores = uncore_get_uncores(UNCORE_ACCESS_MSR,
UNCORE_SPR_MSR_EXTRA_UNCORES,
spr_msr_uncores);
type = uncore_find_type_by_id(uncore_msr_uncores, UNCORE_SPR_CHA);
if (type) {
rdmsrl(SPR_MSR_UNC_CBO_CONFIG, num_cbo);
type->num_boxes = num_cbo;
}
spr_uncore_iio_free_running.num_boxes = uncore_type_max_boxes(uncore_msr_uncores, UNCORE_SPR_IIO);
}


@ -39,7 +39,7 @@ extern void fpu_flush_thread(void);
static inline void switch_fpu_prepare(struct fpu *old_fpu, int cpu)
{
if (cpu_feature_enabled(X86_FEATURE_FPU) &&
!(current->flags & (PF_KTHREAD | PF_IO_WORKER))) {
!(current->flags & (PF_KTHREAD | PF_USER_WORKER))) {
save_fpregs_to_fpstate(old_fpu);
/*
* The save operation preserved register state, so the

View File

@ -79,7 +79,7 @@ int detect_extended_topology_early(struct cpuinfo_x86 *c)
* initial apic id, which also represents 32-bit extended x2apic id.
*/
c->initial_apicid = edx;
smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
#endif
return 0;
}
@ -109,7 +109,8 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
*/
cpuid_count(leaf, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
c->initial_apicid = edx;
core_level_siblings = smp_num_siblings = LEVEL_MAX_SIBLINGS(ebx);
core_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
smp_num_siblings = max_t(int, smp_num_siblings, LEVEL_MAX_SIBLINGS(ebx));
core_plus_mask_width = ht_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);
die_level_siblings = LEVEL_MAX_SIBLINGS(ebx);
pkg_mask_width = die_plus_mask_width = BITS_SHIFT_NEXT_LEVEL(eax);


@ -195,7 +195,6 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
printk("%sCall Trace:\n", log_lvl);
unwind_start(&state, task, regs, stack);
stack = stack ? : get_stack_pointer(task, regs);
regs = unwind_get_entry_regs(&state, &partial);
/*
@ -214,9 +213,13 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
* - hardirq stack
* - entry stack
*/
for ( ; stack; stack = PTR_ALIGN(stack_info.next_sp, sizeof(long))) {
for (stack = stack ?: get_stack_pointer(task, regs);
stack;
stack = stack_info.next_sp) {
const char *stack_name;
stack = PTR_ALIGN(stack, sizeof(long));
if (get_stack_info(stack, task, &stack_info, &visit_mask)) {
/*
* We weren't on a valid stack. It's possible that


@ -57,7 +57,7 @@ static inline void fpregs_restore_userregs(void)
struct fpu *fpu = &current->thread.fpu;
int cpu = smp_processor_id();
if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_IO_WORKER)))
if (WARN_ON_ONCE(current->flags & (PF_KTHREAD | PF_USER_WORKER)))
return;
if (!fpregs_state_valid(fpu, cpu)) {


@ -426,7 +426,7 @@ void kernel_fpu_begin_mask(unsigned int kfpu_mask)
this_cpu_write(in_kernel_fpu, true);
if (!(current->flags & (PF_KTHREAD | PF_IO_WORKER)) &&
if (!(current->flags & (PF_KTHREAD | PF_USER_WORKER)) &&
!test_thread_flag(TIF_NEED_FPU_LOAD)) {
set_thread_flag(TIF_NEED_FPU_LOAD);
save_fpregs_to_fpstate(&current->thread.fpu);


@ -7,6 +7,8 @@
*/
#include <linux/linkage.h>
#include <asm/cpufeatures.h>
#include <asm/alternative.h>
#include <asm/asm.h>
#include <asm/export.h>
@ -29,7 +31,7 @@
*/
SYM_FUNC_START(rep_movs_alternative)
cmpq $64,%rcx
jae .Lunrolled
jae .Llarge
cmp $8,%ecx
jae .Lword
@ -65,6 +67,12 @@ SYM_FUNC_START(rep_movs_alternative)
_ASM_EXTABLE_UA( 2b, .Lcopy_user_tail)
_ASM_EXTABLE_UA( 3b, .Lcopy_user_tail)
.Llarge:
0: ALTERNATIVE "jmp .Lunrolled", "rep movsb", X86_FEATURE_ERMS
1: RET
_ASM_EXTABLE_UA( 0b, 1b)
.p2align 4
.Lunrolled:
10: movq (%rsi),%r8


@ -198,7 +198,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
i++;
}
kfree(v);
return 0;
return msi_device_populate_sysfs(&dev->dev);
error:
if (ret == -ENOSYS)
@ -254,7 +254,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
dev_dbg(&dev->dev,
"xen: msi --> pirq=%d --> irq=%d\n", pirq, irq);
}
return 0;
return msi_device_populate_sysfs(&dev->dev);
error:
dev_err(&dev->dev, "Failed to create MSI%s! ret=%d!\n",
@ -346,7 +346,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
if (ret < 0)
goto out;
}
ret = 0;
ret = msi_device_populate_sysfs(&dev->dev);
out:
return ret;
}
@ -394,6 +394,8 @@ static void xen_teardown_msi_irqs(struct pci_dev *dev)
xen_destroy_irq(msidesc->irq + i);
msidesc->irq = 0;
}
msi_device_destroy_sysfs(&dev->dev);
}
static void xen_pv_teardown_msi_irqs(struct pci_dev *dev)


@ -520,7 +520,7 @@ static inline int bio_check_eod(struct bio *bio)
sector_t maxsector = bdev_nr_sectors(bio->bi_bdev);
unsigned int nr_sectors = bio_sectors(bio);
if (nr_sectors && maxsector &&
if (nr_sectors &&
(nr_sectors > maxsector ||
bio->bi_iter.bi_sector > maxsector - nr_sectors)) {
pr_info_ratelimited("%s: attempt to access beyond end of device\n"


@ -248,7 +248,7 @@ static struct bio *blk_rq_map_bio_alloc(struct request *rq,
{
struct bio *bio;
if (rq->cmd_flags & REQ_ALLOC_CACHE) {
if (rq->cmd_flags & REQ_ALLOC_CACHE && (nr_vecs <= BIO_INLINE_VECS)) {
bio = bio_alloc_bioset(NULL, nr_vecs, rq->cmd_flags, gfp_mask,
&fs_bio_set);
if (!bio)


@ -39,16 +39,20 @@ void __blk_mq_tag_busy(struct blk_mq_hw_ctx *hctx)
{
unsigned int users;
/*
* calling test_bit() prior to test_and_set_bit() is intentional,
* it avoids dirtying the cacheline if the queue is already active.
*/
if (blk_mq_is_shared_tags(hctx->flags)) {
struct request_queue *q = hctx->queue;
if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
if (test_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags) ||
test_and_set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags))
return;
set_bit(QUEUE_FLAG_HCTX_ACTIVE, &q->queue_flags);
} else {
if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
if (test_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state) ||
test_and_set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state))
return;
set_bit(BLK_MQ_S_TAG_ACTIVE, &hctx->state);
}
users = atomic_inc_return(&hctx->tags->active_queues);
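
The comment added above names the pattern: read the bit with test_bit()
first so the cacheline is not dirtied when the queue is already active,
then fall back to the atomic test_and_set_bit(), which also closes the
race the old separate test_bit()/set_bit() pair left open. A minimal,
generic C11 sketch of that check-then-RMW pattern (an illustrative flag,
not the block-layer helpers)::

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Returns true only for the caller that flips the flag from 0 to 1. */
    static bool mark_active(atomic_uint *flag)
    {
        /* Cheap shared read first: in the common already-active case the
         * cacheline can stay shared instead of being taken exclusive. */
        if (atomic_load_explicit(flag, memory_order_relaxed))
            return false;

        /* The atomic read-modify-write closes the window between the
         * check and the set; one concurrent caller sees the 0 -> 1 edge. */
        return atomic_exchange(flag, 1) == 0;
    }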


@ -730,14 +730,16 @@ void wbt_enable_default(struct gendisk *disk)
{
struct request_queue *q = disk->queue;
struct rq_qos *rqos;
bool disable_flag = q->elevator &&
test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags);
bool enable = IS_ENABLED(CONFIG_BLK_WBT_MQ);
if (q->elevator &&
test_bit(ELEVATOR_FLAG_DISABLE_WBT, &q->elevator->flags))
enable = false;
/* Throttling already enabled? */
rqos = wbt_rq_qos(q);
if (rqos) {
if (!disable_flag &&
RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
if (enable && RQWB(rqos)->enable_state == WBT_STATE_OFF_DEFAULT)
RQWB(rqos)->enable_state = WBT_STATE_ON_DEFAULT;
return;
}
@ -746,7 +748,7 @@ void wbt_enable_default(struct gendisk *disk)
if (!blk_queue_registered(q))
return;
if (queue_is_mq(q) && !disable_flag)
if (queue_is_mq(q) && enable)
wbt_init(disk);
}
EXPORT_SYMBOL_GPL(wbt_enable_default);


@ -997,14 +997,34 @@ static void *msg_xfer(struct qaic_device *qdev, struct wrapper_list *wrappers, u
struct xfer_queue_elem elem;
struct wire_msg *out_buf;
struct wrapper_msg *w;
long ret = -EAGAIN;
int xfer_count = 0;
int retry_count;
long ret;
if (qdev->in_reset) {
mutex_unlock(&qdev->cntl_mutex);
return ERR_PTR(-ENODEV);
}
/* Attempt to avoid a partial commit of a message */
list_for_each_entry(w, &wrappers->list, list)
xfer_count++;
for (retry_count = 0; retry_count < QAIC_MHI_RETRY_MAX; retry_count++) {
if (xfer_count <= mhi_get_free_desc_count(qdev->cntl_ch, DMA_TO_DEVICE)) {
ret = 0;
break;
}
msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS);
if (signal_pending(current))
break;
}
if (ret) {
mutex_unlock(&qdev->cntl_mutex);
return ERR_PTR(ret);
}
elem.seq_num = seq_num;
elem.buf = NULL;
init_completion(&elem.xfer_done);
@ -1038,16 +1058,9 @@ static void *msg_xfer(struct qaic_device *qdev, struct wrapper_list *wrappers, u
list_for_each_entry(w, &wrappers->list, list) {
kref_get(&w->ref_count);
retry_count = 0;
retry:
ret = mhi_queue_buf(qdev->cntl_ch, DMA_TO_DEVICE, &w->msg, w->len,
list_is_last(&w->list, &wrappers->list) ? MHI_EOT : MHI_CHAIN);
if (ret) {
if (ret == -EAGAIN && retry_count++ < QAIC_MHI_RETRY_MAX) {
msleep_interruptible(QAIC_MHI_RETRY_WAIT_MS);
if (!signal_pending(current))
goto retry;
}
qdev->cntl_lost_buf = true;
kref_put(&w->ref_count, free_wrapper);
mutex_unlock(&qdev->cntl_mutex);
@ -1249,7 +1262,7 @@ static int qaic_manage(struct qaic_device *qdev, struct qaic_user *usr, struct m
int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_priv)
{
struct qaic_manage_msg *user_msg;
struct qaic_manage_msg *user_msg = data;
struct qaic_device *qdev;
struct manage_msg *msg;
struct qaic_user *usr;
@ -1258,6 +1271,9 @@ int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_
int usr_rcu_id;
int ret;
if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH)
return -EINVAL;
usr = file_priv->driver_priv;
usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
@ -1275,13 +1291,6 @@ int qaic_manage_ioctl(struct drm_device *dev, void *data, struct drm_file *file_
return -ENODEV;
}
user_msg = data;
if (user_msg->len > QAIC_MANAGE_MAX_MSG_LENGTH) {
ret = -EINVAL;
goto out;
}
msg = kzalloc(QAIC_MANAGE_MAX_MSG_LENGTH + sizeof(*msg), GFP_KERNEL);
if (!msg) {
ret = -ENOMEM;
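
The reworked msg_xfer() above counts the wrappers up front and waits until
the channel reports at least that many free descriptors, so a multi-buffer
message is committed either completely or not at all, rather than retrying
individual mhi_queue_buf() calls partway through a message. A toy
ring-buffer sketch of that all-or-nothing reservation (self-contained
illustration, not the MHI API)::

    #include <errno.h>

    #define RING_SLOTS 8

    struct ring {
        int head, tail;             /* head == tail means empty */
        int slot[RING_SLOTS];
    };

    static int ring_free(const struct ring *r)
    {
        return RING_SLOTS - 1 - ((r->head - r->tail + RING_SLOTS) % RING_SLOTS);
    }

    static void ring_push(struct ring *r, int v)
    {
        r->slot[r->head] = v;
        r->head = (r->head + 1) % RING_SLOTS;
    }

    /* Queue an n-part message only when every part fits, so a consumer
     * can never observe a partially committed message. */
    static int queue_all_or_nothing(struct ring *r, const int *parts, int n)
    {
        if (ring_free(r) < n)
            return -EAGAIN;  /* caller may sleep and retry, as msg_xfer() does */
        while (n--)
            ring_push(r, *parts++);  /* cannot fail: space was reserved */
        return 0;
    }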


@ -591,7 +591,7 @@ static int qaic_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struc
struct qaic_bo *bo = to_qaic_bo(obj);
unsigned long offset = 0;
struct scatterlist *sg;
int ret;
int ret = 0;
if (obj->import_attach)
return -EINVAL;
@ -663,6 +663,10 @@ int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
if (args->pad)
return -EINVAL;
size = PAGE_ALIGN(args->size);
if (size == 0)
return -EINVAL;
usr = file_priv->driver_priv;
usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
if (!usr->qddev) {
@ -677,12 +681,6 @@ int qaic_create_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *fi
goto unlock_dev_srcu;
}
size = PAGE_ALIGN(args->size);
if (size == 0) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
bo = qaic_alloc_init_bo();
if (IS_ERR(bo)) {
ret = PTR_ERR(bo);
@ -926,8 +924,8 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
{
struct qaic_attach_slice_entry *slice_ent;
struct qaic_attach_slice *args = data;
int rcu_id, usr_rcu_id, qdev_rcu_id;
struct dma_bridge_chan *dbc;
int usr_rcu_id, qdev_rcu_id;
struct drm_gem_object *obj;
struct qaic_device *qdev;
unsigned long arg_size;
@ -936,6 +934,22 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
struct qaic_bo *bo;
int ret;
if (args->hdr.count == 0)
return -EINVAL;
arg_size = args->hdr.count * sizeof(*slice_ent);
if (arg_size / args->hdr.count != sizeof(*slice_ent))
return -EINVAL;
if (args->hdr.size == 0)
return -EINVAL;
if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE))
return -EINVAL;
if (args->data == 0)
return -EINVAL;
usr = file_priv->driver_priv;
usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
if (!usr->qddev) {
@ -950,43 +964,11 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
goto unlock_dev_srcu;
}
if (args->hdr.count == 0) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
arg_size = args->hdr.count * sizeof(*slice_ent);
if (arg_size / args->hdr.count != sizeof(*slice_ent)) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
if (args->hdr.dbc_id >= qdev->num_dbc) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
if (args->hdr.size == 0) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
if (!(args->hdr.dir == DMA_TO_DEVICE || args->hdr.dir == DMA_FROM_DEVICE)) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
dbc = &qdev->dbc[args->hdr.dbc_id];
if (dbc->usr != usr) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
if (args->data == 0) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
user_data = u64_to_user_ptr(args->data);
slice_ent = kzalloc(arg_size, GFP_KERNEL);
@ -1013,9 +995,21 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
bo = to_qaic_bo(obj);
if (bo->sliced) {
ret = -EINVAL;
goto put_bo;
}
dbc = &qdev->dbc[args->hdr.dbc_id];
rcu_id = srcu_read_lock(&dbc->ch_lock);
if (dbc->usr != usr) {
ret = -EINVAL;
goto unlock_ch_srcu;
}
ret = qaic_prepare_bo(qdev, bo, &args->hdr);
if (ret)
goto put_bo;
goto unlock_ch_srcu;
ret = qaic_attach_slicing_bo(qdev, bo, &args->hdr, slice_ent);
if (ret)
@ -1025,6 +1019,7 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
dma_sync_sgtable_for_cpu(&qdev->pdev->dev, bo->sgt, args->hdr.dir);
bo->dbc = dbc;
srcu_read_unlock(&dbc->ch_lock, rcu_id);
drm_gem_object_put(obj);
srcu_read_unlock(&qdev->dev_lock, qdev_rcu_id);
srcu_read_unlock(&usr->qddev_lock, usr_rcu_id);
@ -1033,6 +1028,8 @@ int qaic_attach_slice_bo_ioctl(struct drm_device *dev, void *data, struct drm_fi
unprepare_bo:
qaic_unprepare_bo(qdev, bo);
unlock_ch_srcu:
srcu_read_unlock(&dbc->ch_lock, rcu_id);
put_bo:
drm_gem_object_put(obj);
free_slice_ent:
@ -1316,7 +1313,6 @@ static int __qaic_execute_bo_ioctl(struct drm_device *dev, void *data, struct dr
received_ts = ktime_get_ns();
size = is_partial ? sizeof(*pexec) : sizeof(*exec);
n = (unsigned long)size * args->hdr.count;
if (args->hdr.count == 0 || n / args->hdr.count != size)
return -EINVAL;
@ -1665,6 +1661,9 @@ int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file
int rcu_id;
int ret;
if (args->pad != 0)
return -EINVAL;
usr = file_priv->driver_priv;
usr_rcu_id = srcu_read_lock(&usr->qddev_lock);
if (!usr->qddev) {
@ -1679,11 +1678,6 @@ int qaic_wait_bo_ioctl(struct drm_device *dev, void *data, struct drm_file *file
goto unlock_dev_srcu;
}
if (args->pad != 0) {
ret = -EINVAL;
goto unlock_dev_srcu;
}
if (args->dbc_id >= qdev->num_dbc) {
ret = -EINVAL;
goto unlock_dev_srcu;
@ -1855,6 +1849,11 @@ void wakeup_dbc(struct qaic_device *qdev, u32 dbc_id)
dbc->usr = NULL;
empty_xfer_list(qdev, dbc);
synchronize_srcu(&dbc->ch_lock);
/*
* Threads holding the channel lock may add more elements to the
* xfer_list. Flush out these elements from the xfer_list.
*/
empty_xfer_list(qdev, dbc);
}
void release_dbc(struct qaic_device *qdev, u32 dbc_id)
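
A recurring cleanup in the hunks above moves validation of pure-input
arguments (count, size, direction, pad) ahead of the SRCU read locks, so
malformed requests fail before any locking, while checks that depend on
shared state (such as dbc ownership) stay under the appropriate lock. A
small sketch of that ordering with a plain mutex (hypothetical types and
names, not the QAIC driver's)::

    #include <errno.h>
    #include <pthread.h>
    #include <stdint.h>

    struct ctx {
        pthread_mutex_t lock;
        uint32_t num_dbc;   /* shared state, only valid under the lock */
    };

    struct args { uint32_t count, pad, dbc_id; };

    static int do_request(struct ctx *c, const struct args *a)
    {
        /* Pure-input validation first: nothing to unwind on failure. */
        if (a->count == 0 || a->pad != 0)
            return -EINVAL;

        pthread_mutex_lock(&c->lock);

        /* Checks against shared state stay under the lock. */
        if (a->dbc_id >= c->num_dbc) {
            pthread_mutex_unlock(&c->lock);
            return -EINVAL;
        }

        /* ... do the work ... */

        pthread_mutex_unlock(&c->lock);
        return 0;
    }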


@ -262,8 +262,8 @@ static void qaic_destroy_drm_device(struct qaic_device *qdev, s32 partition_id)
static int qaic_mhi_probe(struct mhi_device *mhi_dev, const struct mhi_device_id *id)
{
u16 major = -1, minor = -1;
struct qaic_device *qdev;
u16 major, minor;
int ret;
/*


@ -1934,24 +1934,23 @@ static void binder_deferred_fd_close(int fd)
static void binder_transaction_buffer_release(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_buffer *buffer,
binder_size_t failed_at,
binder_size_t off_end_offset,
bool is_failure)
{
int debug_id = buffer->debug_id;
binder_size_t off_start_offset, buffer_offset, off_end_offset;
binder_size_t off_start_offset, buffer_offset;
binder_debug(BINDER_DEBUG_TRANSACTION,
"%d buffer release %d, size %zd-%zd, failed at %llx\n",
proc->pid, buffer->debug_id,
buffer->data_size, buffer->offsets_size,
(unsigned long long)failed_at);
(unsigned long long)off_end_offset);
if (buffer->target_node)
binder_dec_node(buffer->target_node, 1, 0);
off_start_offset = ALIGN(buffer->data_size, sizeof(void *));
off_end_offset = is_failure && failed_at ? failed_at :
off_start_offset + buffer->offsets_size;
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
struct binder_object_header *hdr;
@ -2111,6 +2110,21 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
}
}
/* Clean up all the objects in the buffer */
static inline void binder_release_entire_buffer(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_buffer *buffer,
bool is_failure)
{
binder_size_t off_end_offset;
off_end_offset = ALIGN(buffer->data_size, sizeof(void *));
off_end_offset += buffer->offsets_size;
binder_transaction_buffer_release(proc, thread, buffer,
off_end_offset, is_failure);
}
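The helper's end offset falls out of the binder buffer layout: the payload occupies data_size bytes, padded up to pointer alignment, and the offsets array of offsets_size bytes follows immediately. A tiny standalone check of that arithmetic (ALIGN re-declared here; sizes illustrative):

#include <stdio.h>
#include <stddef.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

int main(void)
{
	size_t data_size = 27, offsets_size = 16;	/* illustrative */
	size_t off_end = ALIGN(data_size, sizeof(void *)) + offsets_size;

	/* On LP64: 27 rounds up to 32, so the offsets end at byte 48. */
	printf("off_end_offset = %zu\n", off_end);
	return 0;
}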
static int binder_translate_binder(struct flat_binder_object *fp,
struct binder_transaction *t,
struct binder_thread *thread)
@ -2806,7 +2820,7 @@ static int binder_proc_transaction(struct binder_transaction *t,
t_outdated->buffer = NULL;
buffer->transaction = NULL;
trace_binder_transaction_update_buffer_release(buffer);
binder_transaction_buffer_release(proc, NULL, buffer, 0, 0);
binder_release_entire_buffer(proc, NULL, buffer, false);
binder_alloc_free_buf(&proc->alloc, buffer);
kfree(t_outdated);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
@ -3775,7 +3789,7 @@ binder_free_buf(struct binder_proc *proc,
binder_node_inner_unlock(buf_node);
}
trace_binder_transaction_buffer_release(buffer);
binder_transaction_buffer_release(proc, thread, buffer, 0, is_failure);
binder_release_entire_buffer(proc, thread, buffer, is_failure);
binder_alloc_free_buf(&proc->alloc, buffer);
}

View File

@ -212,8 +212,8 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
mm = alloc->mm;
if (mm) {
mmap_read_lock(mm);
vma = vma_lookup(mm, alloc->vma_addr);
mmap_write_lock(mm);
vma = alloc->vma;
}
if (!vma && need_mm) {
@ -270,7 +270,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
trace_binder_alloc_page_end(alloc, index);
}
if (mm) {
mmap_read_unlock(mm);
mmap_write_unlock(mm);
mmput(mm);
}
return 0;
@ -303,21 +303,24 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
}
err_no_vma:
if (mm) {
mmap_read_unlock(mm);
mmap_write_unlock(mm);
mmput(mm);
}
return vma ? -ENOMEM : -ESRCH;
}
static inline void binder_alloc_set_vma(struct binder_alloc *alloc,
struct vm_area_struct *vma)
{
/* pairs with smp_load_acquire in binder_alloc_get_vma() */
smp_store_release(&alloc->vma, vma);
}
static inline struct vm_area_struct *binder_alloc_get_vma(
struct binder_alloc *alloc)
{
struct vm_area_struct *vma = NULL;
if (alloc->vma_addr)
vma = vma_lookup(alloc->mm, alloc->vma_addr);
return vma;
/* pairs with smp_store_release in binder_alloc_set_vma() */
return smp_load_acquire(&alloc->vma);
}
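The smp_store_release()/smp_load_acquire() pair is the classic one-shot publication pattern: everything written before the release store is guaranteed visible to a reader whose acquire load observes the pointer. A freestanding C11 sketch of the same idea (names are illustrative, not the kernel's):

#include <stdatomic.h>
#include <stddef.h>

struct area { int initialized; };

static _Atomic(struct area *) published;

/* Writer: fully initialize, then publish with release semantics. */
void publish(struct area *a)
{
	a->initialized = 1;
	atomic_store_explicit(&published, a, memory_order_release);
}

/* Reader: a non-NULL acquire load implies the init above is visible. */
struct area *lookup(void)
{
	return atomic_load_explicit(&published, memory_order_acquire);
}

int main(void)
{
	static struct area a;

	publish(&a);
	return lookup() && lookup()->initialized ? 0 : 1;
}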
static bool debug_low_async_space_locked(struct binder_alloc *alloc, int pid)
@ -380,15 +383,13 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
size_t size, data_offsets_size;
int ret;
mmap_read_lock(alloc->mm);
/* Check binder_alloc is fully initialized */
if (!binder_alloc_get_vma(alloc)) {
mmap_read_unlock(alloc->mm);
binder_alloc_debug(BINDER_DEBUG_USER_ERROR,
"%d: binder_alloc_buf, no vma\n",
alloc->pid);
return ERR_PTR(-ESRCH);
}
mmap_read_unlock(alloc->mm);
data_offsets_size = ALIGN(data_size, sizeof(void *)) +
ALIGN(offsets_size, sizeof(void *));
@ -778,7 +779,9 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
buffer->free = 1;
binder_insert_free_buffer(alloc, buffer);
alloc->free_async_space = alloc->buffer_size / 2;
alloc->vma_addr = vma->vm_start;
/* Signal binder_alloc is fully initialized */
binder_alloc_set_vma(alloc, vma);
return 0;
@ -808,8 +811,7 @@ void binder_alloc_deferred_release(struct binder_alloc *alloc)
buffers = 0;
mutex_lock(&alloc->mutex);
BUG_ON(alloc->vma_addr &&
vma_lookup(alloc->mm, alloc->vma_addr));
BUG_ON(alloc->vma);
while ((n = rb_first(&alloc->allocated_buffers))) {
buffer = rb_entry(n, struct binder_buffer, rb_node);
@ -916,25 +918,17 @@ void binder_alloc_print_pages(struct seq_file *m,
* Make sure the binder_alloc is fully initialized, otherwise we might
* read inconsistent state.
*/
mmap_read_lock(alloc->mm);
if (binder_alloc_get_vma(alloc) == NULL) {
mmap_read_unlock(alloc->mm);
goto uninitialized;
if (binder_alloc_get_vma(alloc) != NULL) {
for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
page = &alloc->pages[i];
if (!page->page_ptr)
free++;
else if (list_empty(&page->lru))
active++;
else
lru++;
}
}
mmap_read_unlock(alloc->mm);
for (i = 0; i < alloc->buffer_size / PAGE_SIZE; i++) {
page = &alloc->pages[i];
if (!page->page_ptr)
free++;
else if (list_empty(&page->lru))
active++;
else
lru++;
}
uninitialized:
mutex_unlock(&alloc->mutex);
seq_printf(m, " pages: %d:%d:%d\n", active, lru, free);
seq_printf(m, " pages high watermark: %zu\n", alloc->pages_high);
@ -969,7 +963,7 @@ int binder_alloc_get_allocated_count(struct binder_alloc *alloc)
*/
void binder_alloc_vma_close(struct binder_alloc *alloc)
{
alloc->vma_addr = 0;
binder_alloc_set_vma(alloc, NULL);
}
/**

View File

@ -75,7 +75,7 @@ struct binder_lru_page {
/**
* struct binder_alloc - per-binder proc state for binder allocator
* @mutex: protects binder_alloc fields
* @vma_addr: vm_area_struct->vm_start passed to mmap_handler
* @vma: vm_area_struct passed to mmap_handler
* (invariant after mmap)
* @mm: copy of task->mm (invariant after open)
* @buffer: base of per-proc address space mapped via mmap
@ -99,7 +99,7 @@ struct binder_lru_page {
*/
struct binder_alloc {
struct mutex mutex;
unsigned long vma_addr;
struct vm_area_struct *vma;
struct mm_struct *mm;
void __user *buffer;
struct list_head buffers;

View File

@ -287,7 +287,7 @@ void binder_selftest_alloc(struct binder_alloc *alloc)
if (!binder_selftest_run)
return;
mutex_lock(&binder_selftest_lock);
if (!binder_selftest_run || !alloc->vma_addr)
if (!binder_selftest_run || !alloc->vma)
goto done;
pr_info("STARTED\n");
binder_selftest_alloc_offset(alloc, end_offset, 0);

View File

@ -2694,18 +2694,36 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc)
return 0;
}
static struct ata_device *ata_find_dev(struct ata_port *ap, int devno)
static struct ata_device *ata_find_dev(struct ata_port *ap, unsigned int devno)
{
if (!sata_pmp_attached(ap)) {
if (likely(devno >= 0 &&
devno < ata_link_max_devices(&ap->link)))
/*
* For the non-PMP case, ata_link_max_devices() returns 1 (SATA case),
* or 2 (IDE master + slave case). However, the former case includes
* libsas hosted devices which are numbered per scsi host, leading
* to devno potentially being larger than 0 but with each struct
* ata_device having its own struct ata_port and struct ata_link.
* To accommodate these, ignore devno and always use device number 0.
*/
if (likely(!sata_pmp_attached(ap))) {
int link_max_devices = ata_link_max_devices(&ap->link);
if (link_max_devices == 1)
return &ap->link.device[0];
if (devno < link_max_devices)
return &ap->link.device[devno];
} else {
if (likely(devno >= 0 &&
devno < ap->nr_pmp_links))
return &ap->pmp_link[devno].device[0];
return NULL;
}
/*
* For PMP-attached devices, the device number corresponds to C
* (channel) of SCSI [H:C:I:L], indicating the port pmp link
* for the device.
*/
if (devno < ap->nr_pmp_links)
return &ap->pmp_link[devno].device[0];
return NULL;
}
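A compact way to read the new mapping, as a standalone decision function (arguments and values are illustrative):

#include <stdio.h>

/* Mirrors the devno handling above: returns the device index within the
 * selected link, or -1 for an invalid devno. */
static int pick_device(int pmp_attached, int link_max_devices, int devno,
		       int nr_pmp_links)
{
	if (!pmp_attached) {
		if (link_max_devices == 1)
			return 0;	/* SATA/libsas: always device 0 */
		return devno < link_max_devices ? devno : -1;
	}
	return devno < nr_pmp_links ? 0 : -1;	/* PMP: devno selects the link */
}

int main(void)
{
	printf("libsas, devno 3 -> %d\n", pick_device(0, 1, 3, 0));	/* 0 */
	printf("IDE slave       -> %d\n", pick_device(0, 2, 1, 0));	/* 1 */
	printf("PMP link 2      -> %d\n", pick_device(1, 1, 2, 4));	/* 0 */
	return 0;
}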

View File

@ -4,16 +4,23 @@
# subsystems should select the appropriate symbols.
config REGMAP
bool "Register Map support" if KUNIT_ALL_TESTS
default y if (REGMAP_I2C || REGMAP_SPI || REGMAP_SPMI || REGMAP_W1 || REGMAP_AC97 || REGMAP_MMIO || REGMAP_IRQ || REGMAP_SOUNDWIRE || REGMAP_SOUNDWIRE_MBQ || REGMAP_SCCB || REGMAP_I3C || REGMAP_SPI_AVMM || REGMAP_MDIO || REGMAP_FSI)
select IRQ_DOMAIN if REGMAP_IRQ
select MDIO_BUS if REGMAP_MDIO
bool
help
Enable support for the Register Map (regmap) access API.
Usually, this option is automatically selected when needed.
However, you may want to enable it manually for running the regmap
KUnit tests.
If unsure, say N.
config REGMAP_KUNIT
tristate "KUnit tests for regmap"
depends on KUNIT
depends on KUNIT && REGMAP
default KUNIT_ALL_TESTS
select REGMAP
select REGMAP_RAM
config REGMAP_AC97

View File

@ -203,15 +203,18 @@ static int regcache_maple_sync(struct regmap *map, unsigned int min,
mas_for_each(&mas, entry, max) {
for (r = max(mas.index, lmin); r <= min(mas.last, lmax); r++) {
mas_pause(&mas);
rcu_read_unlock();
ret = regcache_sync_val(map, r, entry[r - mas.index]);
if (ret != 0)
goto out;
rcu_read_lock();
}
}
out:
rcu_read_unlock();
out:
map->cache_bypass = false;
return ret;

View File

@ -59,6 +59,10 @@ static int regmap_sdw_config_check(const struct regmap_config *config)
if (config->pad_bits != 0)
return -ENOTSUPP;
/* Only bulk writes are supported not multi-register writes */
if (config->can_multi_write)
return -ENOTSUPP;
return 0;
}

View File

@ -2082,6 +2082,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
size_t val_count = val_len / val_bytes;
size_t chunk_count, chunk_bytes;
size_t chunk_regs = val_count;
size_t max_data = map->max_raw_write - map->format.reg_bytes -
map->format.pad_bytes;
int ret, i;
if (!val_count)
@ -2089,8 +2091,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg,
if (map->use_single_write)
chunk_regs = 1;
else if (map->max_raw_write && val_len > map->max_raw_write)
chunk_regs = map->max_raw_write / val_bytes;
else if (map->max_raw_write && val_len > max_data)
chunk_regs = max_data / val_bytes;
chunk_count = val_count / chunk_regs;
chunk_bytes = chunk_regs * val_bytes;

View File

@ -780,7 +780,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
ring_req->u.rw.handle = info->handle;
ring_req->operation = rq_data_dir(req) ?
BLKIF_OP_WRITE : BLKIF_OP_READ;
if (req_op(req) == REQ_OP_FLUSH || req->cmd_flags & REQ_FUA) {
if (req_op(req) == REQ_OP_FLUSH ||
(req_op(req) == REQ_OP_WRITE && (req->cmd_flags & REQ_FUA))) {
/*
* Ideally we can do an unordered flush-to-disk.
* In case the backend only supports barriers, use that.

View File

@ -90,6 +90,9 @@ parisc_agp_tlbflush(struct agp_memory *mem)
{
struct _parisc_agp_info *info = &parisc_agp_info;
/* force fdc ops to be visible to IOMMU */
asm_io_sync();
writeq(info->gart_base | ilog2(info->gart_size), info->ioc_regs+IOC_PCOM);
readq(info->ioc_regs+IOC_PCOM); /* flush */
}
@ -158,6 +161,7 @@ parisc_agp_insert_memory(struct agp_memory *mem, off_t pg_start, int type)
info->gatt[j] =
parisc_agp_mask_memory(agp_bridge,
paddr, type);
asm_io_fdc(&info->gatt[j]);
}
}
@ -191,7 +195,16 @@ static unsigned long
parisc_agp_mask_memory(struct agp_bridge_data *bridge, dma_addr_t addr,
int type)
{
return SBA_PDIR_VALID_BIT | addr;
unsigned ci; /* coherent index */
dma_addr_t pa;
pa = addr & IOVP_MASK;
asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pa)));
pa |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */
pa |= SBA_PDIR_VALID_BIT; /* set "valid" bit */
return cpu_to_le64(pa);
}
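A rough userspace illustration of the entry packing (the mask, valid bit and CI value below are stand-ins; the real coherence index comes from the lci instruction on the mapped page, and the final cpu_to_le64() swab is omitted):

#include <stdio.h>
#include <stdint.h>

#define SBA_PDIR_VALID_BIT	0x8000000000000000ULL	/* stand-in */
#define IOVP_MASK		(~0xfffULL)		/* assume 4K pages */
#define PAGE_SHIFT		12

int main(void)
{
	uint64_t addr = 0x12345678ULL;
	uint64_t ci = 0xabcd0000ULL;	/* pretend result of "lci" */
	uint64_t pa = addr & IOVP_MASK;

	pa |= (ci >> PAGE_SHIFT) & 0xff;	/* CI bits into the low byte */
	pa |= SBA_PDIR_VALID_BIT;		/* mark the entry valid */
	printf("gatt entry = %#llx\n", (unsigned long long)pa);
	return 0;
}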
static void

View File

@ -444,9 +444,8 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy)
return 0;
}
static int amd_pstate_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
static int amd_pstate_update_freq(struct cpufreq_policy *policy,
unsigned int target_freq, bool fast_switch)
{
struct cpufreq_freqs freqs;
struct amd_cpudata *cpudata = policy->driver_data;
@ -465,26 +464,51 @@ static int amd_pstate_target(struct cpufreq_policy *policy,
des_perf = DIV_ROUND_CLOSEST(target_freq * cap_perf,
cpudata->max_freq);
cpufreq_freq_transition_begin(policy, &freqs);
WARN_ON(fast_switch && !policy->fast_switch_enabled);
/*
* If fast_switch is desired, then there aren't any registered
* transition notifiers. See comment for
* cpufreq_enable_fast_switch().
*/
if (!fast_switch)
cpufreq_freq_transition_begin(policy, &freqs);
amd_pstate_update(cpudata, min_perf, des_perf,
max_perf, false, policy->governor->flags);
cpufreq_freq_transition_end(policy, &freqs, false);
max_perf, fast_switch, policy->governor->flags);
if (!fast_switch)
cpufreq_freq_transition_end(policy, &freqs, false);
return 0;
}
static int amd_pstate_target(struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
return amd_pstate_update_freq(policy, target_freq, false);
}
static unsigned int amd_pstate_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
return amd_pstate_update_freq(policy, target_freq, true);
}
static void amd_pstate_adjust_perf(unsigned int cpu,
unsigned long _min_perf,
unsigned long target_perf,
unsigned long capacity)
{
unsigned long max_perf, min_perf, des_perf,
cap_perf, lowest_nonlinear_perf;
cap_perf, lowest_nonlinear_perf, max_freq;
struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
struct amd_cpudata *cpudata = policy->driver_data;
unsigned int target_freq;
cap_perf = READ_ONCE(cpudata->highest_perf);
lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
max_freq = READ_ONCE(cpudata->max_freq);
des_perf = cap_perf;
if (target_perf < capacity)
@ -501,6 +525,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
if (max_perf < min_perf)
max_perf = min_perf;
des_perf = clamp_t(unsigned long, des_perf, min_perf, max_perf);
target_freq = div_u64(des_perf * max_freq, max_perf);
policy->cur = target_freq;
amd_pstate_update(cpudata, min_perf, des_perf, max_perf, true,
policy->governor->flags);
cpufreq_cpu_put(policy);
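The new policy->cur bookkeeping just scales the chosen perf level back into the frequency domain, target_freq = des_perf * max_freq / max_perf. With illustrative numbers:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t des_perf = 128, max_perf = 255;	/* illustrative CPPC levels */
	uint64_t max_freq = 3600000;			/* kHz, illustrative */
	uint64_t target_freq = des_perf * max_freq / max_perf;

	printf("policy->cur = %llu kHz\n", (unsigned long long)target_freq);
	return 0;
}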
@ -715,6 +743,7 @@ static int amd_pstate_cpu_exit(struct cpufreq_policy *policy)
freq_qos_remove_request(&cpudata->req[1]);
freq_qos_remove_request(&cpudata->req[0]);
policy->fast_switch_possible = false;
kfree(cpudata);
return 0;
@ -1079,7 +1108,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
policy->policy = CPUFREQ_POLICY_POWERSAVE;
if (boot_cpu_has(X86_FEATURE_CPPC)) {
policy->fast_switch_possible = true;
ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
if (ret)
return ret;
@ -1102,7 +1130,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
{
pr_debug("CPU %d exiting\n", policy->cpu);
policy->fast_switch_possible = false;
return 0;
}
@ -1309,6 +1336,7 @@ static struct cpufreq_driver amd_pstate_driver = {
.flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
.verify = amd_pstate_verify,
.target = amd_pstate_target,
.fast_switch = amd_pstate_fast_switch,
.init = amd_pstate_cpu_init,
.exit = amd_pstate_cpu_exit,
.suspend = amd_pstate_cpu_suspend,

View File

@ -1028,7 +1028,7 @@ static int cxl_mem_get_partition_info(struct cxl_dev_state *cxlds)
* cxl_dev_state_identify() - Send the IDENTIFY command to the device.
* @cxlds: The device data for the operation
*
* Return: 0 if identify was executed successfully.
* Return: 0 if identify was executed successfully or media not ready.
*
* This will dispatch the identify command to the device and on success populate
* structures to be exported to sysfs.
@ -1041,6 +1041,9 @@ int cxl_dev_state_identify(struct cxl_dev_state *cxlds)
u32 val;
int rc;
if (!cxlds->media_ready)
return 0;
mbox_cmd = (struct cxl_mbox_cmd) {
.opcode = CXL_MBOX_OP_IDENTIFY,
.size_out = sizeof(id),
@ -1102,6 +1105,13 @@ int cxl_mem_create_range_info(struct cxl_dev_state *cxlds)
struct device *dev = cxlds->dev;
int rc;
if (!cxlds->media_ready) {
cxlds->dpa_res = DEFINE_RES_MEM(0, 0);
cxlds->ram_res = DEFINE_RES_MEM(0, 0);
cxlds->pmem_res = DEFINE_RES_MEM(0, 0);
return 0;
}
cxlds->dpa_res =
(struct resource)DEFINE_RES_MEM(0, cxlds->total_bytes);

View File

@ -101,23 +101,57 @@ int devm_cxl_port_enumerate_dports(struct cxl_port *port)
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_port_enumerate_dports, CXL);
/*
* Wait up to @media_ready_timeout for the device to report memory
* active.
*/
int cxl_await_media_ready(struct cxl_dev_state *cxlds)
static int cxl_dvsec_mem_range_valid(struct cxl_dev_state *cxlds, int id)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
int d = cxlds->cxl_dvsec;
bool valid = false;
int rc, i;
u32 temp;
if (id > CXL_DVSEC_RANGE_MAX)
return -EINVAL;
/* Check MEM INFO VALID bit first, give up after 1s */
i = 1;
do {
rc = pci_read_config_dword(pdev,
d + CXL_DVSEC_RANGE_SIZE_LOW(id),
&temp);
if (rc)
return rc;
valid = FIELD_GET(CXL_DVSEC_MEM_INFO_VALID, temp);
if (valid)
break;
msleep(1000);
} while (i--);
if (!valid) {
dev_err(&pdev->dev,
"Timeout awaiting memory range %d valid after 1s.\n",
id);
return -ETIMEDOUT;
}
return 0;
}
static int cxl_dvsec_mem_range_active(struct cxl_dev_state *cxlds, int id)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
int d = cxlds->cxl_dvsec;
bool active = false;
u64 md_status;
int rc, i;
u32 temp;
if (id > CXL_DVSEC_RANGE_MAX)
return -EINVAL;
/* Check MEM ACTIVE bit, up to 60s timeout by default */
for (i = media_ready_timeout; i; i--) {
u32 temp;
rc = pci_read_config_dword(
pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(0), &temp);
pdev, d + CXL_DVSEC_RANGE_SIZE_LOW(id), &temp);
if (rc)
return rc;
@ -134,6 +168,39 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds)
return -ETIMEDOUT;
}
return 0;
}
/*
* Wait up to @media_ready_timeout for the device to report memory
* active.
*/
int cxl_await_media_ready(struct cxl_dev_state *cxlds)
{
struct pci_dev *pdev = to_pci_dev(cxlds->dev);
int d = cxlds->cxl_dvsec;
int rc, i, hdm_count;
u64 md_status;
u16 cap;
rc = pci_read_config_word(pdev,
d + CXL_DVSEC_CAP_OFFSET, &cap);
if (rc)
return rc;
hdm_count = FIELD_GET(CXL_DVSEC_HDM_COUNT_MASK, cap);
for (i = 0; i < hdm_count; i++) {
rc = cxl_dvsec_mem_range_valid(cxlds, i);
if (rc)
return rc;
}
for (i = 0; i < hdm_count; i++) {
rc = cxl_dvsec_mem_range_active(cxlds, i);
if (rc)
return rc;
}
md_status = readq(cxlds->regs.memdev + CXLMDEV_STATUS_OFFSET);
if (!CXLMDEV_READY(md_status))
return -EIO;
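Both range helpers above share the same poll-until-bit-with-deadline shape. A self-contained sketch of that shape (read_valid_bit() is a hypothetical stub standing in for the config-space read plus FIELD_GET()):

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stub: pretend the MEM_INFO_VALID bit flips on the 2nd poll. */
static bool read_valid_bit(int id)
{
	static int polls;

	(void)id;
	return ++polls > 1;
}

static int wait_range_valid(int id)
{
	int i = 1;			/* one retry, i.e. a 1s budget */

	do {
		if (read_valid_bit(id))
			return 0;
		sleep(1);		/* msleep(1000) in the kernel */
	} while (i--);

	return -1;			/* -ETIMEDOUT in the kernel */
}

int main(void)
{
	printf("range 0 valid: %s\n", wait_range_valid(0) ? "no" : "yes");
	return 0;
}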
@ -241,17 +308,36 @@ static void disable_hdm(void *_cxlhdm)
hdm + CXL_HDM_DECODER_CTRL_OFFSET);
}
static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm)
int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm)
{
void __iomem *hdm = cxlhdm->regs.hdm_decoder;
void __iomem *hdm;
u32 global_ctrl;
/*
* If the hdm capability was not mapped there is nothing to enable and
* the caller is responsible for what happens next. For example,
* emulate a passthrough decoder.
*/
if (IS_ERR(cxlhdm))
return 0;
hdm = cxlhdm->regs.hdm_decoder;
global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET);
/*
* If the HDM decoder capability was enabled on entry, skip
* registering disable_hdm() since this decode capability may be
* owned by platform firmware.
*/
if (global_ctrl & CXL_HDM_DECODER_ENABLE)
return 0;
writel(global_ctrl | CXL_HDM_DECODER_ENABLE,
hdm + CXL_HDM_DECODER_CTRL_OFFSET);
return devm_add_action_or_reset(host, disable_hdm, cxlhdm);
return devm_add_action_or_reset(&port->dev, disable_hdm, cxlhdm);
}
EXPORT_SYMBOL_NS_GPL(devm_cxl_enable_hdm, CXL);
int cxl_dvsec_rr_decode(struct device *dev, int d,
struct cxl_endpoint_dvsec_info *info)
@ -425,7 +511,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm,
if (info->mem_enabled)
return 0;
rc = devm_cxl_enable_hdm(&port->dev, cxlhdm);
rc = devm_cxl_enable_hdm(port, cxlhdm);
if (rc)
return rc;

View File

@ -750,11 +750,10 @@ struct cxl_port *devm_cxl_add_port(struct device *host, struct device *uport,
parent_port = parent_dport ? parent_dport->port : NULL;
if (IS_ERR(port)) {
dev_dbg(uport, "Failed to add %s%s%s%s: %ld\n",
dev_name(&port->dev),
parent_port ? " to " : "",
dev_dbg(uport, "Failed to add%s%s%s: %ld\n",
parent_port ? " port to " : "",
parent_port ? dev_name(&parent_port->dev) : "",
parent_port ? "" : " (root port)",
parent_port ? "" : " root port",
PTR_ERR(port));
} else {
dev_dbg(uport, "%s added%s%s%s\n",

View File

@ -710,6 +710,7 @@ struct cxl_endpoint_dvsec_info {
struct cxl_hdm;
struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port,
struct cxl_endpoint_dvsec_info *info);
int devm_cxl_enable_hdm(struct cxl_port *port, struct cxl_hdm *cxlhdm);
int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm,
struct cxl_endpoint_dvsec_info *info);
int devm_cxl_add_passthrough_decoder(struct cxl_port *port);

View File

@ -266,6 +266,7 @@ struct cxl_poison_state {
* @regs: Parsed register blocks
* @cxl_dvsec: Offset to the PCIe device DVSEC
* @rcd: operating in RCD mode (CXL 3.0 9.11.8 CXL Devices Attached to an RCH)
* @media_ready: Indicate whether the device media is usable
* @payload_size: Size of space for payload
* (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
* @lsa_size: Size of Label Storage Area
@ -303,6 +304,7 @@ struct cxl_dev_state {
int cxl_dvsec;
bool rcd;
bool media_ready;
size_t payload_size;
size_t lsa_size;
struct mutex mbox_mutex; /* Protects device mailbox and firmware */

View File

@ -31,6 +31,8 @@
#define CXL_DVSEC_RANGE_BASE_LOW(i) (0x24 + (i * 0x10))
#define CXL_DVSEC_MEM_BASE_LOW_MASK GENMASK(31, 28)
#define CXL_DVSEC_RANGE_MAX 2
/* CXL 2.0 8.1.4: Non-CXL Function Map DVSEC */
#define CXL_DVSEC_FUNCTION_MAP 2

View File

@ -124,6 +124,9 @@ static int cxl_mem_probe(struct device *dev)
struct dentry *dentry;
int rc;
if (!cxlds->media_ready)
return -EBUSY;
/*
* Someone is trying to reattach this device after it lost its port
* connection (an endpoint port previously registered by this memdev was

View File

@ -708,6 +708,12 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (rc)
dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");
rc = cxl_await_media_ready(cxlds);
if (rc == 0)
cxlds->media_ready = true;
else
dev_warn(&pdev->dev, "Media not active (%d)\n", rc);
rc = cxl_pci_setup_mailbox(cxlds);
if (rc)
return rc;

View File

@ -60,13 +60,17 @@ static int discover_region(struct device *dev, void *root)
static int cxl_switch_port_probe(struct cxl_port *port)
{
struct cxl_hdm *cxlhdm;
int rc;
int rc, nr_dports;
rc = devm_cxl_port_enumerate_dports(port);
if (rc < 0)
return rc;
nr_dports = devm_cxl_port_enumerate_dports(port);
if (nr_dports < 0)
return nr_dports;
cxlhdm = devm_cxl_setup_hdm(port, NULL);
rc = devm_cxl_enable_hdm(port, cxlhdm);
if (rc)
return rc;
if (!IS_ERR(cxlhdm))
return devm_cxl_enumerate_decoders(cxlhdm, NULL);
@ -75,7 +79,7 @@ static int cxl_switch_port_probe(struct cxl_port *port)
return PTR_ERR(cxlhdm);
}
if (rc == 1) {
if (nr_dports == 1) {
dev_dbg(&port->dev, "Fallback to passthrough decoder\n");
return devm_cxl_add_passthrough_decoder(port);
}
@ -113,12 +117,6 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
if (rc)
return rc;
rc = cxl_await_media_ready(cxlds);
if (rc) {
dev_err(&port->dev, "Media not active (%d)\n", rc);
return rc;
}
rc = devm_cxl_enumerate_decoders(cxlhdm, &info);
if (rc)
return rc;

View File

@ -132,7 +132,7 @@
#define ATC_DST_PIP BIT(12) /* Destination Picture-in-Picture enabled */
#define ATC_SRC_DSCR_DIS BIT(16) /* Src Descriptor fetch disable */
#define ATC_DST_DSCR_DIS BIT(20) /* Dst Descriptor fetch disable */
#define ATC_FC GENMASK(22, 21) /* Choose Flow Controller */
#define ATC_FC GENMASK(23, 21) /* Choose Flow Controller */
#define ATC_FC_MEM2MEM 0x0 /* Mem-to-Mem (DMA) */
#define ATC_FC_MEM2PER 0x1 /* Mem-to-Periph (DMA) */
#define ATC_FC_PER2MEM 0x2 /* Periph-to-Mem (DMA) */
@ -153,8 +153,6 @@
#define ATC_AUTO BIT(31) /* Auto multiple buffer tx enable */
/* Bitfields in CFG */
#define ATC_PER_MSB(h) ((0x30U & (h)) >> 4) /* Extract most significant bits of a handshaking identifier */
#define ATC_SRC_PER GENMASK(3, 0) /* Channel src rq associated with periph handshaking ifc h */
#define ATC_DST_PER GENMASK(7, 4) /* Channel dst rq associated with periph handshaking ifc h */
#define ATC_SRC_REP BIT(8) /* Source Replay Mod */
@ -181,10 +179,15 @@
#define ATC_DPIP_HOLE GENMASK(15, 0)
#define ATC_DPIP_BOUNDARY GENMASK(25, 16)
#define ATC_SRC_PER_ID(id) (FIELD_PREP(ATC_SRC_PER_MSB, (id)) | \
FIELD_PREP(ATC_SRC_PER, (id)))
#define ATC_DST_PER_ID(id) (FIELD_PREP(ATC_DST_PER_MSB, (id)) | \
FIELD_PREP(ATC_DST_PER, (id)))
#define ATC_PER_MSB GENMASK(5, 4) /* Extract MSBs of a handshaking identifier */
#define ATC_SRC_PER_ID(id) \
({ typeof(id) _id = (id); \
FIELD_PREP(ATC_SRC_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \
FIELD_PREP(ATC_SRC_PER, _id); })
#define ATC_DST_PER_ID(id) \
({ typeof(id) _id = (id); \
FIELD_PREP(ATC_DST_PER_MSB, FIELD_GET(ATC_PER_MSB, _id)) | \
FIELD_PREP(ATC_DST_PER, _id); })
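Worked example of the split these macros perform, with the helpers re-declared for a standalone build (the exact bit position of ATC_SRC_PER_MSB is not shown in this hunk, so its placement below is illustrative):

#include <stdio.h>

#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(m, v)	(((v) & (m)) >> __builtin_ctz(m))
#define FIELD_PREP(m, v) (((v) << __builtin_ctz(m)) & (m))

#define ATC_PER_MSB	GENMASK(5, 4)	/* as defined above */
#define ATC_SRC_PER	GENMASK(3, 0)	/* as defined above */
#define ATC_SRC_PER_MSB	GENMASK(11, 10)	/* illustrative placement */

int main(void)
{
	unsigned int id = 0x25;	/* bits 5:4 hold 0b10, low nibble is 0x5 */
	unsigned int cfg = FIELD_PREP(ATC_SRC_PER_MSB, FIELD_GET(ATC_PER_MSB, id)) |
			   FIELD_PREP(ATC_SRC_PER, id);

	printf("cfg = %#x\n", cfg);	/* 0x805 */
	return 0;
}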

View File

@ -1102,6 +1102,8 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
NULL,
src_addr, dst_addr,
xt, xt->sgl);
if (!first)
return NULL;
/* Length of the block is (BLEN+1) microblocks. */
for (i = 0; i < xt->numf - 1; i++)
@ -1132,8 +1134,9 @@ at_xdmac_prep_interleaved(struct dma_chan *chan,
src_addr, dst_addr,
xt, chunk);
if (!desc) {
list_splice_tail_init(&first->descs_list,
&atchan->free_descs_list);
if (first)
list_splice_tail_init(&first->descs_list,
&atchan->free_descs_list);
return NULL;
}

View File

@ -277,7 +277,6 @@ static int idxd_cdev_open(struct inode *inode, struct file *filp)
if (wq_dedicated(wq)) {
rc = idxd_wq_set_pasid(wq, pasid);
if (rc < 0) {
iommu_sva_unbind_device(sva);
dev_err(dev, "wq set pasid failed: %d\n", rc);
goto failed_set_pasid;
}

View File

@ -1050,7 +1050,7 @@ static bool _trigger(struct pl330_thread *thrd)
return true;
}
static bool _start(struct pl330_thread *thrd)
static bool pl330_start_thread(struct pl330_thread *thrd)
{
switch (_state(thrd)) {
case PL330_STATE_FAULT_COMPLETING:
@ -1702,7 +1702,7 @@ static int pl330_update(struct pl330_dmac *pl330)
thrd->req_running = -1;
/* Get going again ASAP */
_start(thrd);
pl330_start_thread(thrd);
/* For now, just make a list of callbacks to be done */
list_add_tail(&descdone->rqd, &pl330->req_done);
@ -2089,7 +2089,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
} else {
/* Make sure the PL330 Channel thread is active */
spin_lock(&pch->thread->dmac->lock);
_start(pch->thread);
pl330_start_thread(pch->thread);
spin_unlock(&pch->thread->dmac->lock);
}
@ -2107,7 +2107,7 @@ static void pl330_tasklet(struct tasklet_struct *t)
if (power_down) {
pch->active = true;
spin_lock(&pch->thread->dmac->lock);
_start(pch->thread);
pl330_start_thread(pch->thread);
spin_unlock(&pch->thread->dmac->lock);
power_down = false;
}

View File

@ -5527,7 +5527,7 @@ static int udma_probe(struct platform_device *pdev)
return ret;
}
static int udma_pm_suspend(struct device *dev)
static int __maybe_unused udma_pm_suspend(struct device *dev)
{
struct udma_dev *ud = dev_get_drvdata(dev);
struct dma_device *dma_dev = &ud->ddev;
@ -5549,7 +5549,7 @@ static int udma_pm_suspend(struct device *dev)
return 0;
}
static int udma_pm_resume(struct device *dev)
static int __maybe_unused udma_pm_resume(struct device *dev)
{
struct udma_dev *ud = dev_get_drvdata(dev);
struct dma_device *dma_dev = &ud->ddev;

View File

@ -15,6 +15,8 @@
#include "common.h"
static DEFINE_IDA(ffa_bus_id);
static int ffa_device_match(struct device *dev, struct device_driver *drv)
{
const struct ffa_device_id *id_table;
@ -53,7 +55,8 @@ static void ffa_device_remove(struct device *dev)
{
struct ffa_driver *ffa_drv = to_ffa_driver(dev->driver);
ffa_drv->remove(to_ffa_dev(dev));
if (ffa_drv->remove)
ffa_drv->remove(to_ffa_dev(dev));
}
static int ffa_device_uevent(const struct device *dev, struct kobj_uevent_env *env)
@ -130,6 +133,7 @@ static void ffa_release_device(struct device *dev)
{
struct ffa_device *ffa_dev = to_ffa_dev(dev);
ida_free(&ffa_bus_id, ffa_dev->id);
kfree(ffa_dev);
}
@ -170,18 +174,24 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
const struct ffa_ops *ops)
{
int ret;
int id, ret;
struct device *dev;
struct ffa_device *ffa_dev;
ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL);
if (!ffa_dev)
id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL);
if (id < 0)
return NULL;
ffa_dev = kzalloc(sizeof(*ffa_dev), GFP_KERNEL);
if (!ffa_dev) {
ida_free(&ffa_bus_id, id);
return NULL;
}
dev = &ffa_dev->dev;
dev->bus = &ffa_bus_type;
dev->release = ffa_release_device;
dev_set_name(&ffa_dev->dev, "arm-ffa-%04x", vm_id);
dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
ffa_dev->vm_id = vm_id;
ffa_dev->ops = ops;
@ -217,4 +227,5 @@ void arm_ffa_bus_exit(void)
{
ffa_devices_unregister();
bus_unregister(&ffa_bus_type);
ida_destroy(&ffa_bus_id);
}

View File

@ -193,7 +193,8 @@ __ffa_partition_info_get(u32 uuid0, u32 uuid1, u32 uuid2, u32 uuid3,
int idx, count, flags = 0, sz, buf_sz;
ffa_value_t partition_info;
if (!buffer || !num_partitions) /* Just get the count for now */
if (drv_info->version > FFA_VERSION_1_0 &&
(!buffer || !num_partitions)) /* Just get the count for now */
flags = PARTITION_INFO_GET_RETURN_COUNT_ONLY;
mutex_lock(&drv_info->rx_lock);
@ -420,12 +421,17 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
ep_mem_access->receiver = args->attrs[idx].receiver;
ep_mem_access->attrs = args->attrs[idx].attrs;
ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs);
ep_mem_access->flag = 0;
ep_mem_access->reserved = 0;
}
mem_region->reserved_0 = 0;
mem_region->reserved_1 = 0;
mem_region->ep_count = args->nattrs;
composite = buffer + COMPOSITE_OFFSET(args->nattrs);
composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg);
composite->addr_range_cnt = num_entries;
composite->reserved = 0;
length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries);
frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0);
@ -460,6 +466,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize,
constituents->address = sg_phys(args->sg);
constituents->pg_cnt = args->sg->length / FFA_PAGE_SIZE;
constituents->reserved = 0;
constituents++;
frag_len += sizeof(struct ffa_mem_region_addr_range);
} while ((args->sg = sg_next(args->sg)));

View File

@ -1066,7 +1066,7 @@ static int scmi_xfer_raw_worker_init(struct scmi_raw_mode_info *raw)
raw->wait_wq = alloc_workqueue("scmi-raw-wait-wq-%d",
WQ_UNBOUND | WQ_FREEZABLE |
WQ_HIGHPRI, WQ_SYSFS, raw->id);
WQ_HIGHPRI | WQ_SYSFS, 0, raw->id);
if (!raw->wait_wq)
return -ENOMEM;

View File

@ -897,7 +897,7 @@ config GPIO_F7188X
help
This option enables support for GPIOs found on Fintek Super-I/O
chips F71869, F71869A, F71882FG, F71889F and F81866.
As well as Nuvoton Super-I/O chip NCT6116D.
As well as Nuvoton Super-I/O chip NCT6126D.
To compile this driver as a module, choose M here: the module will
be called f7188x-gpio.

View File

@ -48,7 +48,7 @@
/*
* Nuvoton devices.
*/
#define SIO_NCT6116D_ID 0xD283 /* NCT6116D chipset ID */
#define SIO_NCT6126D_ID 0xD283 /* NCT6126D chipset ID */
#define SIO_LD_GPIO_NUVOTON 0x07 /* GPIO logical device */
@ -62,7 +62,7 @@ enum chips {
f81866,
f81804,
f81865,
nct6116d,
nct6126d,
};
static const char * const f7188x_names[] = {
@ -74,7 +74,7 @@ static const char * const f7188x_names[] = {
"f81866",
"f81804",
"f81865",
"nct6116d",
"nct6126d",
};
struct f7188x_sio {
@ -187,8 +187,8 @@ static int f7188x_gpio_set_config(struct gpio_chip *chip, unsigned offset,
/* Output mode register (0:open drain 1:push-pull). */
#define f7188x_gpio_out_mode(base) ((base) + 3)
#define f7188x_gpio_dir_invert(type) ((type) == nct6116d)
#define f7188x_gpio_data_single(type) ((type) == nct6116d)
#define f7188x_gpio_dir_invert(type) ((type) == nct6126d)
#define f7188x_gpio_data_single(type) ((type) == nct6126d)
static struct f7188x_gpio_bank f71869_gpio_bank[] = {
F7188X_GPIO_BANK(0, 6, 0xF0, DRVNAME "-0"),
@ -274,7 +274,7 @@ static struct f7188x_gpio_bank f81865_gpio_bank[] = {
F7188X_GPIO_BANK(60, 5, 0x90, DRVNAME "-6"),
};
static struct f7188x_gpio_bank nct6116d_gpio_bank[] = {
static struct f7188x_gpio_bank nct6126d_gpio_bank[] = {
F7188X_GPIO_BANK(0, 8, 0xE0, DRVNAME "-0"),
F7188X_GPIO_BANK(10, 8, 0xE4, DRVNAME "-1"),
F7188X_GPIO_BANK(20, 8, 0xE8, DRVNAME "-2"),
@ -282,7 +282,7 @@ static struct f7188x_gpio_bank nct6116d_gpio_bank[] = {
F7188X_GPIO_BANK(40, 8, 0xF0, DRVNAME "-4"),
F7188X_GPIO_BANK(50, 8, 0xF4, DRVNAME "-5"),
F7188X_GPIO_BANK(60, 8, 0xF8, DRVNAME "-6"),
F7188X_GPIO_BANK(70, 1, 0xFC, DRVNAME "-7"),
F7188X_GPIO_BANK(70, 8, 0xFC, DRVNAME "-7"),
};
static int f7188x_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
@ -490,9 +490,9 @@ static int f7188x_gpio_probe(struct platform_device *pdev)
data->nr_bank = ARRAY_SIZE(f81865_gpio_bank);
data->bank = f81865_gpio_bank;
break;
case nct6116d:
data->nr_bank = ARRAY_SIZE(nct6116d_gpio_bank);
data->bank = nct6116d_gpio_bank;
case nct6126d:
data->nr_bank = ARRAY_SIZE(nct6126d_gpio_bank);
data->bank = nct6126d_gpio_bank;
break;
default:
return -ENODEV;
@ -559,9 +559,9 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
case SIO_F81865_ID:
sio->type = f81865;
break;
case SIO_NCT6116D_ID:
case SIO_NCT6126D_ID:
sio->device = SIO_LD_GPIO_NUVOTON;
sio->type = nct6116d;
sio->type = nct6126d;
break;
default:
pr_info("Unsupported Fintek device 0x%04x\n", devid);
@ -569,7 +569,7 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
}
/* double check manufacturer where possible */
if (sio->type != nct6116d) {
if (sio->type != nct6126d) {
manid = superio_inw(addr, SIO_FINTEK_MANID);
if (manid != SIO_FINTEK_ID) {
pr_debug("Not a Fintek device at 0x%08x\n", addr);
@ -581,7 +581,7 @@ static int __init f7188x_find(int addr, struct f7188x_sio *sio)
err = 0;
pr_info("Found %s at %#x\n", f7188x_names[sio->type], (unsigned int)addr);
if (sio->type != nct6116d)
if (sio->type != nct6126d)
pr_info(" revision %d\n", superio_inb(addr, SIO_FINTEK_DEVREV));
err:

View File

@ -369,7 +369,7 @@ static void gpio_mockup_debugfs_setup(struct device *dev,
priv->offset = i;
priv->desc = gpiochip_get_desc(gc, i);
debugfs_create_file(name, 0200, chip->dbg_dir, priv,
debugfs_create_file(name, 0600, chip->dbg_dir, priv,
&gpio_mockup_debugfs_ops);
}
}

View File

@ -209,6 +209,8 @@ static int gpiochip_find_base(int ngpio)
break;
/* nope, check the space right after the chip */
base = gdev->base + gdev->ngpio;
if (base < GPIO_DYNAMIC_BASE)
base = GPIO_DYNAMIC_BASE;
}
if (gpio_is_valid(base)) {
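Effect of the clamp with illustrative numbers (GPIO_DYNAMIC_BASE is 512 in current kernels): after a legacy chip at base 0 with 32 lines, the next dynamic allocation used to land at 32, inside the range reserved for static numbering; the clamp pushes it up into the dynamic region.

#include <stdio.h>

#define GPIO_DYNAMIC_BASE 512

int main(void)
{
	int chip_base = 0, chip_ngpio = 32;	/* a legacy static chip */
	int base = chip_base + chip_ngpio;	/* next candidate: 32 */

	if (base < GPIO_DYNAMIC_BASE)
		base = GPIO_DYNAMIC_BASE;	/* dynamic chips start at 512 */
	printf("allocated base = %d\n", base);
	return 0;
}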

View File

@ -6892,8 +6892,10 @@ static int gfx_v10_0_kiq_resume(struct amdgpu_device *adev)
return r;
r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
if (unlikely(r != 0))
if (unlikely(r != 0)) {
amdgpu_bo_unreserve(ring->mqd_obj);
return r;
}
gfx_v10_0_kiq_init_queue(ring);
amdgpu_bo_kunmap(ring->mqd_obj);

View File

@ -3617,8 +3617,10 @@ static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
return r;
r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
if (unlikely(r != 0))
if (unlikely(r != 0)) {
amdgpu_bo_unreserve(ring->mqd_obj);
return r;
}
gfx_v9_0_kiq_init_queue(ring);
amdgpu_bo_kunmap(ring->mqd_obj);

View File

@ -57,7 +57,13 @@ static int psp_v10_0_init_microcode(struct psp_context *psp)
if (err)
return err;
return psp_init_ta_microcode(psp, ucode_prefix);
err = psp_init_ta_microcode(psp, ucode_prefix);
if ((adev->ip_versions[GC_HWIP][0] == IP_VERSION(9, 1, 0)) &&
(adev->pdev->revision == 0xa1) &&
(psp->securedisplay_context.context.bin_desc.fw_version >= 0x27000008)) {
adev->psp.securedisplay_context.context.bin_desc.size_bytes = 0;
}
return err;
}
static int psp_v10_0_ring_create(struct psp_context *psp,

View File

@ -2479,20 +2479,25 @@ static void dm_gpureset_toggle_interrupts(struct amdgpu_device *adev,
if (acrtc && state->stream_status[i].plane_count != 0) {
irq_source = IRQ_TYPE_PFLIP + acrtc->otg_inst;
rc = dc_interrupt_set(adev->dm.dc, irq_source, enable) ? 0 : -EBUSY;
DRM_DEBUG_VBL("crtc %d - vupdate irq %sabling: r=%d\n",
acrtc->crtc_id, enable ? "en" : "dis", rc);
if (rc)
DRM_WARN("Failed to %s pflip interrupts\n",
enable ? "enable" : "disable");
if (enable) {
rc = amdgpu_dm_crtc_enable_vblank(&acrtc->base);
if (rc)
DRM_WARN("Failed to enable vblank interrupts\n");
} else {
amdgpu_dm_crtc_disable_vblank(&acrtc->base);
}
if (amdgpu_dm_crtc_vrr_active(to_dm_crtc_state(acrtc->base.state)))
rc = amdgpu_dm_crtc_set_vupdate_irq(&acrtc->base, true);
} else
rc = amdgpu_dm_crtc_set_vupdate_irq(&acrtc->base, false);
if (rc)
DRM_WARN("Failed to %sable vupdate interrupt\n", enable ? "en" : "dis");
irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst;
/* During gpu-reset we disable and then enable vblank irq, so
* don't use amdgpu_irq_get/put() to avoid refcount change.
*/
if (!dc_interrupt_set(adev->dm.dc, irq_source, enable))
DRM_WARN("Failed to %sable vblank interrupt\n", enable ? "en" : "dis");
}
}
@ -2852,7 +2857,7 @@ static int dm_resume(void *handle)
* this is the case when traversing through already created
* MST connectors, should be skipped
*/
if (aconnector->dc_link->type == dc_connection_mst_branch)
if (aconnector && aconnector->mst_root)
continue;
mutex_lock(&aconnector->hpd_lock);
@ -6737,7 +6742,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
int clock, bpp = 0;
bool is_y420 = false;
if (!aconnector->mst_output_port || !aconnector->dc_sink)
if (!aconnector->mst_output_port)
return 0;
mst_port = aconnector->mst_output_port;

View File

@ -146,7 +146,6 @@ static void vblank_control_worker(struct work_struct *work)
static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
{
enum dc_irq_source irq_source;
struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
struct dm_crtc_state *acrtc_state = to_dm_crtc_state(crtc->state);
@ -169,18 +168,9 @@ static inline int dm_set_vblank(struct drm_crtc *crtc, bool enable)
if (rc)
return rc;
if (amdgpu_in_reset(adev)) {
irq_source = IRQ_TYPE_VBLANK + acrtc->otg_inst;
/* During gpu-reset we disable and then enable vblank irq, so
* don't use amdgpu_irq_get/put() to avoid refcount change.
*/
if (!dc_interrupt_set(adev->dm.dc, irq_source, enable))
rc = -EBUSY;
} else {
rc = (enable)
? amdgpu_irq_get(adev, &adev->crtc_irq, acrtc->crtc_id)
: amdgpu_irq_put(adev, &adev->crtc_irq, acrtc->crtc_id);
}
rc = (enable)
? amdgpu_irq_get(adev, &adev->crtc_irq, acrtc->crtc_id)
: amdgpu_irq_put(adev, &adev->crtc_irq, acrtc->crtc_id);
if (rc)
return rc;

View File

@ -871,13 +871,11 @@ static ssize_t amdgpu_get_pp_od_clk_voltage(struct device *dev,
}
if (ret == -ENOENT) {
size = amdgpu_dpm_print_clock_levels(adev, OD_SCLK, buf);
if (size > 0) {
size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
}
size += amdgpu_dpm_print_clock_levels(adev, OD_MCLK, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_VDDC_CURVE, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_VDDGFX_OFFSET, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_RANGE, buf + size);
size += amdgpu_dpm_print_clock_levels(adev, OD_CCLK, buf + size);
}
if (size == 0)

View File

@ -125,6 +125,7 @@ static struct cmn2asic_msg_mapping smu_v13_0_7_message_map[SMU_MSG_MAX_COUNT] =
MSG_MAP(ArmD3, PPSMC_MSG_ArmD3, 0),
MSG_MAP(AllowGpo, PPSMC_MSG_SetGpoAllow, 0),
MSG_MAP(GetPptLimit, PPSMC_MSG_GetPptLimit, 0),
MSG_MAP(NotifyPowerSource, PPSMC_MSG_NotifyPowerSource, 0),
};
static struct cmn2asic_mapping smu_v13_0_7_clk_map[SMU_CLK_COUNT] = {

View File

@ -264,28 +264,10 @@ void drmm_kfree(struct drm_device *dev, void *data)
}
EXPORT_SYMBOL(drmm_kfree);
static void drmm_mutex_release(struct drm_device *dev, void *res)
void __drmm_mutex_release(struct drm_device *dev, void *res)
{
struct mutex *lock = res;
mutex_destroy(lock);
}
/**
* drmm_mutex_init - &drm_device-managed mutex_init()
* @dev: DRM device
* @lock: lock to be initialized
*
* Returns:
* 0 on success, or a negative errno code otherwise.
*
* This is a &drm_device-managed version of mutex_init(). The initialized
* lock is automatically destroyed on the final drm_dev_put().
*/
int drmm_mutex_init(struct drm_device *dev, struct mutex *lock)
{
mutex_init(lock);
return drmm_add_action_or_reset(dev, drmm_mutex_release, lock);
}
EXPORT_SYMBOL(drmm_mutex_init);
EXPORT_SYMBOL(__drmm_mutex_release);

Some files were not shown because too many files have changed in this diff.