Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up fixes.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo 2022-01-03 11:54:30 -03:00
commit 65f8d08cf8
258 changed files with 2039 additions and 1138 deletions


@ -1689,6 +1689,8 @@
architectures force reset to be always executed
i8042.unlock [HW] Unlock (ignore) the keylock
i8042.kbdreset [HW] Reset device connected to KBD port
i8042.probe_defer
[HW] Allow deferred probing upon i8042 probe errors
i810= [HW,DRM]


@ -11,9 +11,11 @@ systems. Some systems use variants that don't meet branding requirements,
and so are not advertised as being I2C but come under different names,
e.g. TWI (Two Wire Interface), IIC.
The official I2C specification is the `"I2C-bus specification and user
manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
published by NXP Semiconductors.
The latest official I2C specification is the `"I2C-bus specification and user
manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
published by NXP Semiconductors. However, you need to log in to the site to
access the PDF. An older version of the specification (revision 6) is archived
`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.
SMBus (System Management Bus) is based on the I2C protocol, and is mostly
a subset of I2C protocols and signaling. Many I2C devices will work on an


@ -196,11 +196,12 @@ ad_actor_sys_prio
ad_actor_system
In an AD system, this specifies the mac-address for the actor in
protocol packet exchanges (LACPDUs). The value cannot be NULL or
multicast. It is preferred to have the local-admin bit set for this
mac but driver does not enforce it. If the value is not given then
system defaults to using the masters' mac address as actors' system
address.
protocol packet exchanges (LACPDUs). The value cannot be a multicast
address. If the all-zeroes MAC is specified, bonding will internally
use the MAC of the bond itself. It is preferred to have the
local-admin bit set for this mac but the driver does not enforce it.
If the value is not given, the system defaults to using the master's
mac address as the actor's system address.
This parameter has effect only in 802.3ad mode and is available through
SysFs interface.
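
The all-zeroes fallback described above is just an address selection step. A minimal C sketch, with invented helper and parameter names (only is_zero_ether_addr() is a real kernel helper):

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

/* Hypothetical helper: choose the actor system address used in
 * LACPDUs, falling back to the bond's own MAC when the configured
 * value is all zeroes, as described above.
 */
static const u8 *pick_actor_system(const u8 *ad_actor_system,
                                   const struct net_device *bond_dev)
{
        if (is_zero_ether_addr(ad_actor_system))
                return bond_dev->dev_addr;      /* the bond's own MAC */
        return ad_actor_system;
}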


@ -183,6 +183,7 @@ PHY and allows physical transmission and reception of Ethernet frames.
IRQ config, enable, reset
DPNI (Datapath Network Interface)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Contains TX/RX queues, network interface configuration, and RX buffer pool
configuration mechanisms. The TX/RX queues are in memory and are identified
by queue number.


@ -25,7 +25,8 @@ ip_default_ttl - INTEGER
ip_no_pmtu_disc - INTEGER
Disable Path MTU Discovery. If enabled in mode 1 and a
fragmentation-required ICMP is received, the PMTU to this
destination will be set to min_pmtu (see below). You will need
destination will be set to the smallest of the old MTU to
this destination and min_pmtu (see below). You will need
to raise min_pmtu to the smallest interface MTU on your system
manually if you want to avoid locally generated fragments.
@ -49,7 +50,8 @@ ip_no_pmtu_disc - INTEGER
Default: FALSE
min_pmtu - INTEGER
default 552 - minimum discovered Path MTU
default 552 - minimum Path MTU. Unless this is changed manually,
each cached pmtu will never be lower than this setting.
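
Read together with the ip_no_pmtu_disc text above, the cached value behaves like a clamp. A standalone sketch of just that arithmetic, with illustrative names rather than the actual route-cache code:

#include <linux/minmax.h>
#include <linux/types.h>

static u32 cached_pmtu(u32 old_mtu, u32 icmp_mtu, u32 min_pmtu)
{
        /* take the smaller of the old MTU and the ICMP-reported MTU,
         * but never drop below min_pmtu
         */
        return max(min(old_mtu, icmp_mtu), min_pmtu);
}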
ip_forward_use_pmtu - BOOLEAN
By default we don't trust protocol path MTUs while forwarding


@ -582,8 +582,8 @@ Time stamps for outgoing packets are to be generated as follows:
and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
- As soon as the driver has sent the packet and/or obtained a
hardware time stamp for it, it passes the time stamp back by
calling skb_hwtstamp_tx() with the original skb, the raw
hardware time stamp. skb_hwtstamp_tx() clones the original skb and
calling skb_tstamp_tx() with the original skb, the raw
hardware time stamp. skb_tstamp_tx() clones the original skb and
adds the timestamps, therefore the original skb has to be freed now.
If obtaining the hardware time stamp somehow fails, then the driver
should not fall back to software time stamping. The rationale is that
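
A minimal sketch of the driver side of this contract, assuming a hypothetical device that reports a raw nanosecond counter; only skb_tstamp_tx() and the freeing rule come from the text above:

#include <linux/ktime.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void my_tx_complete(struct sk_buff *skb, u64 raw_ns)
{
        struct skb_shared_hwtstamps hwts = { };

        hwts.hwtstamp = ns_to_ktime(raw_ns);    /* raw counter -> ktime_t */
        skb_tstamp_tx(skb, &hwts);              /* clones the skb, attaches the stamp */
        dev_kfree_skb_any(skb);                 /* the original skb must be freed now */
}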


@ -326,6 +326,8 @@ usi-headset
Headset support on USI machines
dual-codecs
Lenovo laptops with dual codecs
alc285-hp-amp-init
HP laptops which require speaker amplifier initialization (ALC285)
ALC680
======


@ -14845,7 +14845,7 @@ PCIE DRIVER FOR MEDIATEK
M: Ryder Lee <ryder.lee@mediatek.com>
M: Jianjun Wang <jianjun.wang@mediatek.com>
L: linux-pci@vger.kernel.org
L: linux-mediatek@lists.infradead.org
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: Documentation/devicetree/bindings/pci/mediatek*
F: drivers/pci/controller/*mediatek*
@ -17423,7 +17423,7 @@ F: drivers/video/fbdev/sm712*
SILVACO I3C DUAL-ROLE MASTER
M: Miquel Raynal <miquel.raynal@bootlin.com>
M: Conor Culhane <conor.culhane@silvaco.com>
L: linux-i3c@lists.infradead.org
L: linux-i3c@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
F: drivers/i3c/master/svc-i3c-master.c


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc8
NAME = Gobble Gobble
# *DOCUMENTATION*


@ -309,6 +309,7 @@ mdio {
ethphy: ethernet-phy@1 {
reg = <1>;
qca,clk-out-frequency = <125000000>;
};
};
};


@ -17,7 +17,6 @@
#ifdef CONFIG_EFI
void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);


@ -596,11 +596,9 @@ call_fpe:
tstne r0, #0x04000000 @ bit 26 set on both ARM and Thumb-2
reteq lr
and r8, r0, #0x00000f00 @ mask out CP number
THUMB( lsr r8, r8, #8 )
mov r7, #1
add r6, r10, #TI_USED_CP
ARM( strb r7, [r6, r8, lsr #8] ) @ set appropriate used_cp[]
THUMB( strb r7, [r6, r8] ) @ set appropriate used_cp[]
add r6, r10, r8, lsr #8 @ add used_cp[] array offset first
strb r7, [r6, #TI_USED_CP] @ set appropriate used_cp[]
#ifdef CONFIG_IWMMXT
@ Test if we need to give access to iWMMXt coprocessors
ldr r5, [r10, #TI_FLAGS]
@ -609,7 +607,7 @@ call_fpe:
bcs iwmmxt_task_enable
#endif
ARM( add pc, pc, r8, lsr #6 )
THUMB( lsl r8, r8, #2 )
THUMB( lsr r8, r8, #6 )
THUMB( add pc, r8 )
nop


@ -114,6 +114,7 @@ ENTRY(secondary_startup)
add r12, r12, r10
ret r12
1: bl __after_proc_init
ldr r7, __secondary_data @ reload r7
ldr sp, [r7, #12] @ set up the stack pointer
ldr r0, [r7, #16] @ set up task pointer
mov fp, #0


@ -69,7 +69,7 @@ &emac {
pinctrl-0 = <&emac_rgmii_pins>;
phy-supply = <&reg_gmac_3v3>;
phy-handle = <&ext_rgmii_phy>;
phy-mode = "rgmii";
phy-mode = "rgmii-id";
status = "okay";
};


@ -719,7 +719,7 @@ i2c0: i2c@2000000 {
clock-names = "i2c";
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(16)>;
scl-gpio = <&gpio2 15 GPIO_ACTIVE_HIGH>;
scl-gpios = <&gpio2 15 GPIO_ACTIVE_HIGH>;
status = "disabled";
};
@ -768,7 +768,7 @@ i2c4: i2c@2040000 {
clock-names = "i2c";
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(16)>;
scl-gpio = <&gpio2 16 GPIO_ACTIVE_HIGH>;
scl-gpios = <&gpio2 16 GPIO_ACTIVE_HIGH>;
status = "disabled";
};


@ -14,7 +14,6 @@
#ifdef CONFIG_EFI
extern void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#else
#define efi_init()
#endif


@ -85,11 +85,6 @@ config MMU
config STACK_GROWSUP
def_bool y
config ARCH_DEFCONFIG
string
default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
config GENERIC_LOCKBREAK
bool
default y


@ -14,7 +14,7 @@ static inline void
_futex_spin_lock(u32 __user *uaddr)
{
extern u32 lws_lock_start[];
long index = ((long)uaddr & 0x3f8) >> 1;
long index = ((long)uaddr & 0x7f8) >> 1;
arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
preempt_disable();
arch_spin_lock(s);
@ -24,7 +24,7 @@ static inline void
_futex_spin_unlock(u32 __user *uaddr)
{
extern u32 lws_lock_start[];
long index = ((long)uaddr & 0x3f8) >> 1;
long index = ((long)uaddr & 0x7f8) >> 1;
arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
arch_spin_unlock(s);
preempt_enable();
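
The widened mask doubles the number of spinlock buckets the hash can select: 0x3f8 keeps address bits 3-9 (128 distinct indices) while 0x7f8 keeps bits 3-10 (256). A standalone userspace sketch of just that arithmetic:

#include <stdio.h>

int main(void)
{
        unsigned long uaddr = 0x12345678;       /* any futex address */

        /* old hash: 7 significant bits, 128 buckets */
        printf("old index: %lu\n", (uaddr & 0x3f8) >> 1);
        /* new hash: 8 significant bits, 256 buckets */
        printf("new index: %lu\n", (uaddr & 0x7f8) >> 1);
        return 0;
}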


@ -472,7 +472,7 @@ lws_start:
extrd,u %r1,PSW_W_BIT,1,%r1
/* sp must be aligned on 4, so deposit the W bit setting into
* the bottom of sp temporarily */
or,ev %r1,%r30,%r30
or,od %r1,%r30,%r30
/* Clip LWS number to a 32-bit value for 32-bit processes */
depdi 0, 31, 32, %r20


@ -730,6 +730,8 @@ void notrace handle_interruption(int code, struct pt_regs *regs)
}
mmap_read_unlock(current->mm);
}
/* CPU could not fetch instruction, so clear stale IIR value. */
regs->iir = 0xbaadf00d;
fallthrough;
case 27:
/* Data memory protection ID trap */


@ -183,7 +183,7 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
{
pte_t pte = __pte(st->current_flags);
if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
return;
if (!pte_write(pte) || !pte_exec(pte))


@ -13,7 +13,6 @@
#ifdef CONFIG_EFI
extern void efi_init(void);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
#else
#define efi_init()
#endif


@ -197,8 +197,6 @@ static inline bool efi_runtime_supported(void)
extern void parse_efi_setup(u64 phys_addr, u32 data_len);
extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
extern void efi_thunk_runtime_setup(void);
efi_status_t efi_set_virtual_address_map(unsigned long memory_map_size,
unsigned long descriptor_size,


@ -4,8 +4,8 @@
#include <asm/cpufeature.h>
#define PKRU_AD_BIT 0x1
#define PKRU_WD_BIT 0x2
#define PKRU_AD_BIT 0x1u
#define PKRU_WD_BIT 0x2u
#define PKRU_BITS_PER_PKEY 2
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
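
The added 'u' suffix matters because these masks get shifted by a pkey index: with 16 protection keys at 2 bits each, the write-disable bit of the highest pkey lands in bit 31, and left-shifting a signed 1 into the sign bit is undefined behaviour. A sketch, helper name invented:

#define PKRU_AD_BIT 0x1u
#define PKRU_WD_BIT 0x2u
#define PKRU_BITS_PER_PKEY 2

static inline unsigned int pkru_wd_mask(int pkey)
{
        /* pkey 15: 0x2u << 30 sets bit 31 without signed-shift UB */
        return PKRU_WD_BIT << (pkey * PKRU_BITS_PER_PKEY);
}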


@ -713,9 +713,6 @@ static void __init early_reserve_memory(void)
early_reserve_initrd();
if (efi_enabled(EFI_BOOT))
efi_memblock_x86_reserve_range();
memblock_x86_reserve_range_setup_data();
reserve_ibft_region();
@ -742,28 +739,6 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
return 0;
}
static char * __init prepare_command_line(void)
{
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
parse_early_param();
return command_line;
}
/*
* Determine if we were loaded by an EFI loader. If so, then we have also been
* passed the efi memmap, systab, etc., so we should use these data structures
@ -852,23 +827,6 @@ void __init setup_arch(char **cmdline_p)
x86_init.oem.arch_setup();
/*
* x86_configure_nx() is called before parse_early_param() (called by
* prepare_command_line()) to detect whether hardware doesn't support
* NX (so that the early EHCI debug console setup can safely call
* set_fixmap()). It may then be called again from within noexec_setup()
* during parsing early parameters to honor the respective command line
* option.
*/
x86_configure_nx();
/*
* This parses early params and it needs to run before
* early_reserve_memory() because latter relies on such settings
* supplied as early params.
*/
*cmdline_p = prepare_command_line();
/*
* Do some memory reservations *before* memory is added to memblock, so
* memblock allocations won't overwrite it.
@ -902,6 +860,36 @@ void __init setup_arch(char **cmdline_p)
bss_resource.start = __pa_symbol(__bss_start);
bss_resource.end = __pa_symbol(__bss_stop)-1;
#ifdef CONFIG_CMDLINE_BOOL
#ifdef CONFIG_CMDLINE_OVERRIDE
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
#else
if (builtin_cmdline[0]) {
/* append boot loader cmdline to builtin */
strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
*cmdline_p = command_line;
/*
* x86_configure_nx() is called before parse_early_param() to detect
* whether hardware doesn't support NX (so that the early EHCI debug
* console setup can safely call set_fixmap()). It may then be called
* again from within noexec_setup() during parsing early parameters
* to honor the respective command line option.
*/
x86_configure_nx();
parse_early_param();
if (efi_enabled(EFI_BOOT))
efi_memblock_x86_reserve_range();
#ifdef CONFIG_MEMORY_HOTPLUG
/*
* Memory used by the kernel cannot be hot-removed because Linux


@ -68,7 +68,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
"(__parainstructions|__alt_instructions)(_end)?|"
"(__iommu_table|__apicdrivers|__smp_locks)(_end)?|"
"__(start|end)_pci_.*|"
#if CONFIG_FW_LOADER_BUILTIN
#if CONFIG_FW_LOADER
"__(start|end)_builtin_fw|"
#endif
"__(start|stop)___ksymtab(_gpl)?|"


@ -671,7 +671,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
BUG_ON(buffer->user_data > alloc->buffer + alloc->buffer_size);
if (buffer->async_transaction) {
alloc->free_async_space += size + sizeof(struct binder_buffer);
alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);
binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC_ASYNC,
"%d: binder_free_buf size %zd async free %zd\n",


@ -37,7 +37,7 @@ struct charlcd_priv {
bool must_clear;
/* contains the LCD config state */
unsigned long int flags;
unsigned long flags;
/* Current escape sequence and its length, or -1 if outside */
struct {
@ -578,6 +578,9 @@ static int charlcd_init(struct charlcd *lcd)
* Since charlcd_init_display() needs to write data, we have to
* mark the LCD initialized just before.
*/
if (WARN_ON(!lcd->ops->init_display))
return -EINVAL;
ret = lcd->ops->init_display(lcd);
if (ret)
return ret;


@ -687,11 +687,11 @@ static int sunxi_rsb_hw_init(struct sunxi_rsb *rsb)
static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)
{
/* Keep the clock and PM reference counts consistent. */
if (pm_runtime_status_suspended(rsb->dev))
pm_runtime_resume(rsb->dev);
reset_control_assert(rsb->rstc);
clk_disable_unprepare(rsb->clk);
/* Keep the clock and PM reference counts consistent. */
if (!pm_runtime_status_suspended(rsb->dev))
clk_disable_unprepare(rsb->clk);
}
static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)


@ -3031,7 +3031,7 @@ cleanup_bmc_device(struct kref *ref)
* with removing the device attributes while reading a device
* attribute.
*/
schedule_work(&bmc->remove_work);
queue_work(remove_work_wq, &bmc->remove_work);
}
/*
@ -5392,22 +5392,27 @@ static int ipmi_init_msghandler(void)
if (initialized)
goto out;
init_srcu_struct(&ipmi_interfaces_srcu);
rv = init_srcu_struct(&ipmi_interfaces_srcu);
if (rv)
goto out;
remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
if (!remove_work_wq) {
pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
rv = -ENOMEM;
goto out_wq;
}
timer_setup(&ipmi_timer, ipmi_timeout, 0);
mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
if (!remove_work_wq) {
pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
rv = -ENOMEM;
goto out;
}
initialized = true;
out_wq:
if (rv)
cleanup_srcu_struct(&ipmi_interfaces_srcu);
out:
mutex_unlock(&ipmi_interfaces_mutex);
return rv;


@ -1659,6 +1659,9 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
}
}
ssif_info->client = client;
i2c_set_clientdata(client, ssif_info);
rv = ssif_check_and_remove(client, ssif_info);
/* If rv is 0 and addr source is not SI_ACPI, continue probing */
if (!rv && ssif_info->addr_source == SI_ACPI) {
@ -1679,9 +1682,6 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
ipmi_addr_src_to_str(ssif_info->addr_source),
client->addr, client->adapter->name, slave_addr);
ssif_info->client = client;
i2c_set_clientdata(client, ssif_info);
/* Now check for system interface capabilities */
msg[0] = IPMI_NETFN_APP_REQUEST << 2;
msg[1] = IPMI_GET_SYSTEM_INTERFACE_CAPABILITIES_CMD;
@ -1881,6 +1881,7 @@ static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id)
dev_err(&ssif_info->client->dev,
"Unable to start IPMI SSIF: %d\n", rv);
i2c_set_clientdata(client, NULL);
kfree(ssif_info);
}
kfree(resp);


@ -211,6 +211,12 @@ static u32 uof_get_ae_mask(u32 obj_num)
return adf_4xxx_fw_config[obj_num].ae_mask;
}
static u32 get_vf2pf_sources(void __iomem *pmisc_addr)
{
/* For the moment do not report vf2pf sources */
return 0;
}
void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &adf_4xxx_class;
@ -254,6 +260,7 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)
hw_data->set_msix_rttable = set_msix_default_rttable;
hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
hw_data->enable_pfvf_comms = pfvf_comms_disabled;
hw_data->get_vf2pf_sources = get_vf2pf_sources;
hw_data->disable_iov = adf_disable_sriov;
hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION;


@ -46,6 +46,7 @@
struct dln2_gpio {
struct platform_device *pdev;
struct gpio_chip gpio;
struct irq_chip irqchip;
/*
* Cache pin direction to save us one transfer, since the hardware has
@ -383,15 +384,6 @@ static void dln2_irq_bus_unlock(struct irq_data *irqd)
mutex_unlock(&dln2->irq_lock);
}
static struct irq_chip dln2_gpio_irqchip = {
.name = "dln2-irq",
.irq_mask = dln2_irq_mask,
.irq_unmask = dln2_irq_unmask,
.irq_set_type = dln2_irq_set_type,
.irq_bus_lock = dln2_irq_bus_lock,
.irq_bus_sync_unlock = dln2_irq_bus_unlock,
};
static void dln2_gpio_event(struct platform_device *pdev, u16 echo,
const void *data, int len)
{
@ -473,8 +465,15 @@ static int dln2_gpio_probe(struct platform_device *pdev)
dln2->gpio.direction_output = dln2_gpio_direction_output;
dln2->gpio.set_config = dln2_gpio_set_config;
dln2->irqchip.name = "dln2-irq",
dln2->irqchip.irq_mask = dln2_irq_mask,
dln2->irqchip.irq_unmask = dln2_irq_unmask,
dln2->irqchip.irq_set_type = dln2_irq_set_type,
dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock,
dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock,
girq = &dln2->gpio.irq;
girq->chip = &dln2_gpio_irqchip;
girq->chip = &dln2->irqchip;
/* The event comes from the outside so no parent handler */
girq->parent_handler = NULL;
girq->num_parents = 0;


@ -100,11 +100,7 @@ static int _virtio_gpio_req(struct virtio_gpio *vgpio, u16 type, u16 gpio,
virtqueue_kick(vgpio->request_vq);
mutex_unlock(&vgpio->lock);
if (!wait_for_completion_timeout(&line->completion, HZ)) {
dev_err(dev, "GPIO operation timed out\n");
ret = -ETIMEDOUT;
goto out;
}
wait_for_completion(&line->completion);
if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) {
dev_err(dev, "GPIO request failed: %d\n", gpio);


@ -3166,6 +3166,12 @@ static void amdgpu_device_detect_sriov_bios(struct amdgpu_device *adev)
bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
{
switch (asic_type) {
#ifdef CONFIG_DRM_AMDGPU_SI
case CHIP_HAINAN:
#endif
case CHIP_TOPAZ:
/* chips with no display hardware */
return false;
#if defined(CONFIG_DRM_AMD_DC)
case CHIP_TAHITI:
case CHIP_PITCAIRN:
@ -4461,7 +4467,7 @@ int amdgpu_device_mode1_reset(struct amdgpu_device *adev)
int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
struct amdgpu_reset_context *reset_context)
{
int i, j, r = 0;
int i, r = 0;
struct amdgpu_job *job = NULL;
bool need_full_reset =
test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
@ -4483,15 +4489,8 @@ int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
/*clear job fence from fence drv to avoid force_completion
*leave NULL and vm flush fence in fence drv */
for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) {
struct dma_fence *old, **ptr;
amdgpu_fence_driver_clear_job_fences(ring);
ptr = &ring->fence_drv.fences[j];
old = rcu_dereference_protected(*ptr, 1);
if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) {
RCU_INIT_POINTER(*ptr, NULL);
}
}
/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
amdgpu_fence_driver_force_completion(ring);
}


@ -526,10 +526,15 @@ void amdgpu_discovery_harvest_ip(struct amdgpu_device *adev)
}
}
union gc_info {
struct gc_info_v1_0 v1;
struct gc_info_v2_0 v2;
};
int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
{
struct binary_header *bhdr;
struct gc_info_v1_0 *gc_info;
union gc_info *gc_info;
if (!adev->mman.discovery_bin) {
DRM_ERROR("ip discovery uninitialized\n");
@ -537,28 +542,55 @@ int amdgpu_discovery_get_gfx_info(struct amdgpu_device *adev)
}
bhdr = (struct binary_header *)adev->mman.discovery_bin;
gc_info = (struct gc_info_v1_0 *)(adev->mman.discovery_bin +
gc_info = (union gc_info *)(adev->mman.discovery_bin +
le16_to_cpu(bhdr->table_list[GC].offset));
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->gc_num_se);
adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->gc_num_wgp0_per_sa) +
le32_to_cpu(gc_info->gc_num_wgp1_per_sa));
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->gc_num_sa_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->gc_num_gl2c);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->gc_num_sc_per_se) /
le32_to_cpu(gc_info->gc_num_sa_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->gc_num_packer_per_sc);
switch (gc_info->v1.header.version_major) {
case 1:
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v1.gc_num_se);
adev->gfx.config.max_cu_per_sh = 2 * (le32_to_cpu(gc_info->v1.gc_num_wgp0_per_sa) +
le32_to_cpu(gc_info->v1.gc_num_wgp1_per_sa));
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v1.gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v1.gc_num_gl2c);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v1.gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v1.gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v1.gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v1.gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v1.gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v1.gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v1.gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v1.gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v1.gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v1.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v1.gc_num_sa_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v1.gc_num_packer_per_sc);
break;
case 2:
adev->gfx.config.max_shader_engines = le32_to_cpu(gc_info->v2.gc_num_se);
adev->gfx.config.max_cu_per_sh = le32_to_cpu(gc_info->v2.gc_num_cu_per_sh);
adev->gfx.config.max_sh_per_se = le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
adev->gfx.config.max_backends_per_se = le32_to_cpu(gc_info->v2.gc_num_rb_per_se);
adev->gfx.config.max_texture_channel_caches = le32_to_cpu(gc_info->v2.gc_num_tccs);
adev->gfx.config.max_gprs = le32_to_cpu(gc_info->v2.gc_num_gprs);
adev->gfx.config.max_gs_threads = le32_to_cpu(gc_info->v2.gc_num_max_gs_thds);
adev->gfx.config.gs_vgt_table_depth = le32_to_cpu(gc_info->v2.gc_gs_table_depth);
adev->gfx.config.gs_prim_buffer_depth = le32_to_cpu(gc_info->v2.gc_gsprim_buff_depth);
adev->gfx.config.double_offchip_lds_buf = le32_to_cpu(gc_info->v2.gc_double_offchip_lds_buffer);
adev->gfx.cu_info.wave_front_size = le32_to_cpu(gc_info->v2.gc_wave_size);
adev->gfx.cu_info.max_waves_per_simd = le32_to_cpu(gc_info->v2.gc_max_waves_per_simd);
adev->gfx.cu_info.max_scratch_slots_per_cu = le32_to_cpu(gc_info->v2.gc_max_scratch_slots_per_cu);
adev->gfx.cu_info.lds_size = le32_to_cpu(gc_info->v2.gc_lds_size);
adev->gfx.config.num_sc_per_sh = le32_to_cpu(gc_info->v2.gc_num_sc_per_se) /
le32_to_cpu(gc_info->v2.gc_num_sh_per_se);
adev->gfx.config.num_packer_per_sc = le32_to_cpu(gc_info->v2.gc_num_packer_per_sc);
break;
default:
dev_err(adev->dev,
"Unhandled GC info table %d.%d\n",
gc_info->v1.header.version_major,
gc_info->v1.header.version_minor);
return -EINVAL;
}
return 0;
}


@ -384,7 +384,7 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
struct amdgpu_vm_bo_base *bo_base;
int r;
if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
return;
r = ttm_bo_validate(&bo->tbo, &placement, &ctx);


@ -328,10 +328,11 @@ module_param_named(aspm, amdgpu_aspm, int, 0444);
/**
* DOC: runpm (int)
* Override for runtime power management control for dGPUs in PX/HG laptops. The amdgpu driver can dynamically power down
* the dGPU on PX/HG laptops when it is idle. The default is -1 (auto enable). Setting the value to 0 disables this functionality.
* Override for runtime power management control for dGPUs. The amdgpu driver can dynamically power down
* the dGPUs when they are idle if supported. The default is -1 (auto enable).
* Setting the value to 0 disables this functionality.
*/
MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = PX only default)");
MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto)");
module_param_named(runpm, amdgpu_runtime_pm, int, 0444);
/**
@ -2153,7 +2154,10 @@ static int amdgpu_pmops_suspend(struct device *dev)
adev->in_s3 = true;
r = amdgpu_device_suspend(drm_dev, true);
adev->in_s3 = false;
if (r)
return r;
if (!adev->in_s0ix)
r = amdgpu_asic_reset(adev);
return r;
}
@ -2234,12 +2238,27 @@ static int amdgpu_pmops_runtime_suspend(struct device *dev)
if (amdgpu_device_supports_px(drm_dev))
drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
/*
* By setting mp1_state as PP_MP1_STATE_UNLOAD, MP1 will do some
* proper cleanups and put itself into a state ready for PNP. That
* can address some random resume failures observed on BOCO-capable
* platforms.
* TODO: this may also be needed for PX-capable platforms.
*/
if (amdgpu_device_supports_boco(drm_dev))
adev->mp1_state = PP_MP1_STATE_UNLOAD;
ret = amdgpu_device_suspend(drm_dev, false);
if (ret) {
adev->in_runpm = false;
if (amdgpu_device_supports_boco(drm_dev))
adev->mp1_state = PP_MP1_STATE_NONE;
return ret;
}
if (amdgpu_device_supports_boco(drm_dev))
adev->mp1_state = PP_MP1_STATE_NONE;
if (amdgpu_device_supports_px(drm_dev)) {
/* Only need to handle PCI state in the driver for ATPX
* PCI core handles it for _PR3.


@ -77,11 +77,13 @@ void amdgpu_fence_slab_fini(void)
* Cast helper
*/
static const struct dma_fence_ops amdgpu_fence_ops;
static const struct dma_fence_ops amdgpu_job_fence_ops;
static inline struct amdgpu_fence *to_amdgpu_fence(struct dma_fence *f)
{
struct amdgpu_fence *__f = container_of(f, struct amdgpu_fence, base);
if (__f->base.ops == &amdgpu_fence_ops)
if (__f->base.ops == &amdgpu_fence_ops ||
__f->base.ops == &amdgpu_job_fence_ops)
return __f;
return NULL;
@ -158,19 +160,18 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
}
seq = ++ring->fence_drv.sync_seq;
if (job != NULL && job->job_run_counter) {
if (job && job->job_run_counter) {
/* reinit seq for resubmitted jobs */
fence->seqno = seq;
} else {
dma_fence_init(fence, &amdgpu_fence_ops,
&ring->fence_drv.lock,
adev->fence_context + ring->idx,
seq);
}
if (job != NULL) {
/* mark this fence has a parent job */
set_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &fence->flags);
if (job)
dma_fence_init(fence, &amdgpu_job_fence_ops,
&ring->fence_drv.lock,
adev->fence_context + ring->idx, seq);
else
dma_fence_init(fence, &amdgpu_fence_ops,
&ring->fence_drv.lock,
adev->fence_context + ring->idx, seq);
}
amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,
@ -620,6 +621,25 @@ void amdgpu_fence_driver_hw_init(struct amdgpu_device *adev)
}
}
/**
* amdgpu_fence_driver_clear_job_fences - clear job embedded fences of ring
*
* @ring: ring whose job-embedded fences are to be cleared
*
*/
void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
{
int i;
struct dma_fence *old, **ptr;
for (i = 0; i <= ring->fence_drv.num_fences_mask; i++) {
ptr = &ring->fence_drv.fences[i];
old = rcu_dereference_protected(*ptr, 1);
if (old && old->ops == &amdgpu_job_fence_ops)
RCU_INIT_POINTER(*ptr, NULL);
}
}
/**
* amdgpu_fence_driver_force_completion - force signal latest fence of ring
*
@ -643,16 +663,14 @@ static const char *amdgpu_fence_get_driver_name(struct dma_fence *fence)
static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
{
struct amdgpu_ring *ring;
return (const char *)to_amdgpu_fence(f)->ring->name;
}
if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
{
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
ring = to_amdgpu_ring(job->base.sched);
} else {
ring = to_amdgpu_fence(f)->ring;
}
return (const char *)ring->name;
return (const char *)to_amdgpu_ring(job->base.sched)->name;
}
/**
@ -665,18 +683,25 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
*/
static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
{
struct amdgpu_ring *ring;
if (!timer_pending(&to_amdgpu_fence(f)->ring->fence_drv.fallback_timer))
amdgpu_fence_schedule_fallback(to_amdgpu_fence(f)->ring);
if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
return true;
}
ring = to_amdgpu_ring(job->base.sched);
} else {
ring = to_amdgpu_fence(f)->ring;
}
/**
* amdgpu_job_fence_enable_signaling - enable signaling on job fence
* @f: fence
*
* This is similar to amdgpu_fence_enable_signaling() above; it only
* handles the job-embedded fence.
*/
static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
{
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
if (!timer_pending(&ring->fence_drv.fallback_timer))
amdgpu_fence_schedule_fallback(ring);
if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
return true;
}
@ -692,19 +717,23 @@ static void amdgpu_fence_free(struct rcu_head *rcu)
{
struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {
/* free job if fence has a parent job */
struct amdgpu_job *job;
job = container_of(f, struct amdgpu_job, hw_fence);
kfree(job);
} else {
/* free fence_slab if it's separated fence*/
struct amdgpu_fence *fence;
kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f));
}
fence = to_amdgpu_fence(f);
kmem_cache_free(amdgpu_fence_slab, fence);
}
/**
* amdgpu_job_fence_free - free up the job with embedded fence
*
* @rcu: RCU callback head
*
* Free up the job with embedded fence after the RCU grace period.
*/
static void amdgpu_job_fence_free(struct rcu_head *rcu)
{
struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
/* free job if fence has a parent job */
kfree(container_of(f, struct amdgpu_job, hw_fence));
}
/**
@ -720,6 +749,19 @@ static void amdgpu_fence_release(struct dma_fence *f)
call_rcu(&f->rcu, amdgpu_fence_free);
}
/**
* amdgpu_job_fence_release - callback that job embedded fence can be freed
*
* @f: fence
*
* This is similar to amdgpu_fence_release() above; it only handles
* the job-embedded fence.
*/
static void amdgpu_job_fence_release(struct dma_fence *f)
{
call_rcu(&f->rcu, amdgpu_job_fence_free);
}
static const struct dma_fence_ops amdgpu_fence_ops = {
.get_driver_name = amdgpu_fence_get_driver_name,
.get_timeline_name = amdgpu_fence_get_timeline_name,
@ -727,6 +769,12 @@ static const struct dma_fence_ops amdgpu_fence_ops = {
.release = amdgpu_fence_release,
};
static const struct dma_fence_ops amdgpu_job_fence_ops = {
.get_driver_name = amdgpu_fence_get_driver_name,
.get_timeline_name = amdgpu_job_fence_get_timeline_name,
.enable_signaling = amdgpu_job_fence_enable_signaling,
.release = amdgpu_job_fence_release,
};
/*
* Fence debugfs


@ -53,9 +53,6 @@ enum amdgpu_ring_priority_level {
#define AMDGPU_FENCE_FLAG_INT (1 << 1)
#define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2)
/* fence flag bit to indicate the face is embedded in job*/
#define AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT (DMA_FENCE_FLAG_USER_BITS + 1)
#define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched)
#define AMDGPU_IB_POOL_SIZE (1024 * 1024)
@ -114,6 +111,7 @@ struct amdgpu_fence_driver {
struct dma_fence **fences;
};
void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);
void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);
int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,


@ -246,6 +246,13 @@ static int vcn_v1_0_suspend(void *handle)
{
int r;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
bool idle_work_unexecuted;
idle_work_unexecuted = cancel_delayed_work_sync(&adev->vcn.idle_work);
if (idle_work_unexecuted) {
if (adev->pm.dpm_enabled)
amdgpu_dpm_enable_uvd(adev, false);
}
r = vcn_v1_0_hw_fini(adev);
if (r)


@ -158,6 +158,7 @@ static void dcn31_update_clocks(struct clk_mgr *clk_mgr_base,
union display_idle_optimization_u idle_info = { 0 };
idle_info.idle_info.df_request_disabled = 1;
idle_info.idle_info.phy_ref_clk_off = 1;
idle_info.idle_info.s0i2_rdy = 1;
dcn31_smu_set_display_idle_optimization(clk_mgr, idle_info.data);
/* update power state */
clk_mgr_base->clks.pwr_state = DCN_PWR_STATE_LOW_POWER;


@ -3945,12 +3945,9 @@ static void update_psp_stream_config(struct pipe_ctx *pipe_ctx, bool dpms_off)
config.dig_be = pipe_ctx->stream->link->link_enc_hw_inst;
#if defined(CONFIG_DRM_AMD_DC_DCN)
config.stream_enc_idx = pipe_ctx->stream_res.stream_enc->id - ENGINE_ID_DIGA;
if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY ||
pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA) {
link_enc = pipe_ctx->stream->link->link_enc;
config.dio_output_type = pipe_ctx->stream->link->ep_type;
config.dio_output_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A;
if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_PHY)
link_enc = pipe_ctx->stream->link->link_enc;
else if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)


@ -78,6 +78,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
.get_clock = dcn10_get_clock,
.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
.calc_vupdate_position = dcn10_calc_vupdate_position,
.power_down = dce110_power_down,
.set_backlight_level = dce110_set_backlight_level,
.set_abm_immediate_disable = dce110_set_abm_immediate_disable,
.set_pipe = dce110_set_pipe,


@ -1069,7 +1069,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -603,7 +603,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
.pipe_split_policy = MPC_SPLIT_AVOID,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -874,7 +874,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.clock_trace = true,
.disable_pplib_clock_request = true,
.min_disp_clk_khz = 100000,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -840,7 +840,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -686,7 +686,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.disable_clock_gate = true,
.disable_pplib_clock_request = true,
.disable_pplib_wm_range = true,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -211,7 +211,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -193,7 +193,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = true,
.pipe_split_policy = MPC_SPLIT_AVOID_MULT_DISP,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,


@ -101,6 +101,7 @@ static const struct hw_sequencer_funcs dcn31_funcs = {
.z10_restore = dcn31_z10_restore,
.z10_save_init = dcn31_z10_save_init,
.set_disp_pattern_generator = dcn30_set_disp_pattern_generator,
.optimize_pwr_state = dcn21_optimize_pwr_state,
.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
.update_visual_confirm_color = dcn20_update_visual_confirm_color,
};


@ -355,6 +355,14 @@ static const struct dce110_clk_src_regs clk_src_regs[] = {
clk_src_regs(3, D),
clk_src_regs(4, E)
};
/* pll_id is remapped in dmub; in the driver it is the logical instance */
static const struct dce110_clk_src_regs clk_src_regs_b0[] = {
clk_src_regs(0, A),
clk_src_regs(1, B),
clk_src_regs(2, F),
clk_src_regs(3, G),
clk_src_regs(4, E)
};
static const struct dce110_clk_src_shift cs_shift = {
CS_COMMON_MASK_SH_LIST_DCN2_0(__SHIFT)
@ -994,7 +1002,7 @@ static const struct dc_debug_options debug_defaults_drv = {
.timing_trace = false,
.clock_trace = true,
.disable_pplib_clock_request = false,
.pipe_split_policy = MPC_SPLIT_AVOID,
.pipe_split_policy = MPC_SPLIT_DYNAMIC,
.force_single_disp_pipe_split = false,
.disable_dcc = DCC_ENABLE,
.vsr_support = true,
@ -2276,14 +2284,27 @@ static bool dcn31_resource_construct(
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL1,
&clk_src_regs[1], false);
pool->base.clock_sources[DCN31_CLK_SRC_PLL2] =
/*move phypllx_pixclk_resync to dmub next*/
if (dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
pool->base.clock_sources[DCN31_CLK_SRC_PLL2] =
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL2,
&clk_src_regs_b0[2], false);
pool->base.clock_sources[DCN31_CLK_SRC_PLL3] =
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL3,
&clk_src_regs_b0[3], false);
} else {
pool->base.clock_sources[DCN31_CLK_SRC_PLL2] =
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL2,
&clk_src_regs[2], false);
pool->base.clock_sources[DCN31_CLK_SRC_PLL3] =
pool->base.clock_sources[DCN31_CLK_SRC_PLL3] =
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL3,
&clk_src_regs[3], false);
}
pool->base.clock_sources[DCN31_CLK_SRC_PLL4] =
dcn30_clock_source_create(ctx, ctx->dc_bios,
CLOCK_SOURCE_COMBO_PHY_PLL4,


@ -49,4 +49,35 @@ struct resource_pool *dcn31_create_resource_pool(
const struct dc_init_data *init_data,
struct dc *dc);
/*temp: B0 specific before switch to dcn313 headers*/
#ifndef regPHYPLLF_PIXCLK_RESYNC_CNTL
#define regPHYPLLF_PIXCLK_RESYNC_CNTL 0x007e
#define regPHYPLLF_PIXCLK_RESYNC_CNTL_BASE_IDX 1
#define regPHYPLLG_PIXCLK_RESYNC_CNTL 0x005f
#define regPHYPLLG_PIXCLK_RESYNC_CNTL_BASE_IDX 1
//PHYPLLF_PIXCLK_RESYNC_CNTL
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE__SHIFT 0x0
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE__SHIFT 0x8
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_ENABLE_MASK 0x00000100L
#define PHYPLLF_PIXCLK_RESYNC_CNTL__PHYPLLF_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L
//PHYPLLG_PIXCLK_RESYNC_CNTL
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE__SHIFT 0x0
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS__SHIFT 0x1
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL__SHIFT 0x4
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE__SHIFT 0x8
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE__SHIFT 0x9
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_RESYNC_ENABLE_MASK 0x00000001L
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DEEP_COLOR_DTO_ENABLE_STATUS_MASK 0x00000002L
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_DCCG_DEEP_COLOR_CNTL_MASK 0x00000030L
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_ENABLE_MASK 0x00000100L
#define PHYPLLG_PIXCLK_RESYNC_CNTL__PHYPLLG_PIXCLK_DOUBLE_RATE_ENABLE_MASK 0x00000200L
#endif
#endif /* _DCN31_RESOURCE_H_ */


@ -143,6 +143,55 @@ struct gc_info_v1_0 {
uint32_t gc_num_gl2a;
};
struct gc_info_v1_1 {
struct gpu_info_header header;
uint32_t gc_num_se;
uint32_t gc_num_wgp0_per_sa;
uint32_t gc_num_wgp1_per_sa;
uint32_t gc_num_rb_per_se;
uint32_t gc_num_gl2c;
uint32_t gc_num_gprs;
uint32_t gc_num_max_gs_thds;
uint32_t gc_gs_table_depth;
uint32_t gc_gsprim_buff_depth;
uint32_t gc_parameter_cache_depth;
uint32_t gc_double_offchip_lds_buffer;
uint32_t gc_wave_size;
uint32_t gc_max_waves_per_simd;
uint32_t gc_max_scratch_slots_per_cu;
uint32_t gc_lds_size;
uint32_t gc_num_sc_per_se;
uint32_t gc_num_sa_per_se;
uint32_t gc_num_packer_per_sc;
uint32_t gc_num_gl2a;
uint32_t gc_num_tcp_per_sa;
uint32_t gc_num_sdp_interface;
uint32_t gc_num_tcps;
};
struct gc_info_v2_0 {
struct gpu_info_header header;
uint32_t gc_num_se;
uint32_t gc_num_cu_per_sh;
uint32_t gc_num_sh_per_se;
uint32_t gc_num_rb_per_se;
uint32_t gc_num_tccs;
uint32_t gc_num_gprs;
uint32_t gc_num_max_gs_thds;
uint32_t gc_gs_table_depth;
uint32_t gc_gsprim_buff_depth;
uint32_t gc_parameter_cache_depth;
uint32_t gc_double_offchip_lds_buffer;
uint32_t gc_wave_size;
uint32_t gc_max_waves_per_simd;
uint32_t gc_max_scratch_slots_per_cu;
uint32_t gc_lds_size;
uint32_t gc_num_sc_per_se;
uint32_t gc_num_packer_per_sc;
};
typedef struct harvest_info_header {
uint32_t signature; /* Table Signature */
uint32_t version; /* Table Version */


@ -1568,9 +1568,7 @@ static int smu_suspend(void *handle)
smu->watermarks_bitmap &= ~(WATERMARKS_LOADED);
/* skip CGPG when in S0ix */
if (smu->is_apu && !adev->in_s0ix)
smu_set_gfx_cgpg(&adev->smu, false);
smu_set_gfx_cgpg(&adev->smu, false);
return 0;
}
@ -1601,8 +1599,7 @@ static int smu_resume(void *handle)
return ret;
}
if (smu->is_apu)
smu_set_gfx_cgpg(&adev->smu, true);
smu_set_gfx_cgpg(&adev->smu, true);
smu->disable_uclk_switch = 0;


@ -120,7 +120,8 @@ int smu_v12_0_powergate_sdma(struct smu_context *smu, bool gate)
int smu_v12_0_set_gfx_cgpg(struct smu_context *smu, bool enable)
{
if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG))
/* So far SMU12 is only implemented for the Renoir series, so no APU check is needed here. */
if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) || smu->adev->in_s0ix)
return 0;
return smu_cmn_send_smc_msg_with_param(smu,


@ -1621,7 +1621,7 @@ static int aldebaran_allow_xgmi_power_down(struct smu_context *smu, bool en)
{
return smu_cmn_send_smc_msg_with_param(smu,
SMU_MSG_GmiPwrDnControl,
en ? 1 : 0,
en ? 0 : 1,
NULL);
}


@ -564,6 +564,7 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base,
container_of_user(base, typeof(*ext), base);
const struct set_proto_ctx_engines *set = data;
struct drm_i915_private *i915 = set->i915;
struct i915_engine_class_instance prev_engine;
u64 flags;
int err = 0, n, i, j;
u16 slot, width, num_siblings;
@ -629,7 +630,6 @@ set_proto_ctx_engines_parallel_submit(struct i915_user_extension __user *base,
/* Create contexts / engines */
for (i = 0; i < width; ++i) {
intel_engine_mask_t current_mask = 0;
struct i915_engine_class_instance prev_engine;
for (j = 0; j < num_siblings; ++j) {
struct i915_engine_class_instance ci;


@ -3017,7 +3017,7 @@ eb_composite_fence_create(struct i915_execbuffer *eb, int out_fence_fd)
fence_array = dma_fence_array_create(eb->num_batches,
fences,
eb->context->parallel.fence_context,
eb->context->parallel.seqno,
eb->context->parallel.seqno++,
false);
if (!fence_array) {
kfree(fences);


@ -1662,11 +1662,11 @@ static int steal_guc_id(struct intel_guc *guc, struct intel_context *ce)
GEM_BUG_ON(intel_context_is_parent(cn));
list_del_init(&cn->guc_id.link);
ce->guc_id = cn->guc_id;
ce->guc_id.id = cn->guc_id.id;
spin_lock(&ce->guc_state.lock);
spin_lock(&cn->guc_state.lock);
clr_context_registered(cn);
spin_unlock(&ce->guc_state.lock);
spin_unlock(&cn->guc_state.lock);
set_context_guc_id_invalid(cn);


@ -1224,12 +1224,14 @@ static int mtk_hdmi_bridge_mode_valid(struct drm_bridge *bridge,
return MODE_BAD;
}
if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode))
return MODE_BAD;
if (hdmi->conf) {
if (hdmi->conf->cea_modes_only && !drm_match_cea_mode(mode))
return MODE_BAD;
if (hdmi->conf->max_mode_clock &&
mode->clock > hdmi->conf->max_mode_clock)
return MODE_CLOCK_HIGH;
if (hdmi->conf->max_mode_clock &&
mode->clock > hdmi->conf->max_mode_clock)
return MODE_CLOCK_HIGH;
}
if (mode->clock < 27000)
return MODE_CLOCK_LOW;


@ -353,34 +353,16 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool e
if (ret)
return ret;
fobj = NULL;
} else {
fobj = dma_resv_shared_list(resv);
}
fobj = dma_resv_shared_list(resv);
fence = dma_resv_excl_fence(resv);
if (fence) {
struct nouveau_channel *prev = NULL;
bool must_wait = true;
f = nouveau_local_fence(fence, chan->drm);
if (f) {
rcu_read_lock();
prev = rcu_dereference(f->channel);
if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0))
must_wait = false;
rcu_read_unlock();
}
if (must_wait)
ret = dma_fence_wait(fence, intr);
return ret;
}
if (!exclusive || !fobj)
return ret;
for (i = 0; i < fobj->shared_count && !ret; ++i) {
/* Waiting for the exclusive fence first causes performance regressions
* under some circumstances. So manually wait for the shared ones first.
*/
for (i = 0; i < (fobj ? fobj->shared_count : 0) && !ret; ++i) {
struct nouveau_channel *prev = NULL;
bool must_wait = true;
@ -400,6 +382,26 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan, bool e
ret = dma_fence_wait(fence, intr);
}
fence = dma_resv_excl_fence(resv);
if (fence) {
struct nouveau_channel *prev = NULL;
bool must_wait = true;
f = nouveau_local_fence(fence, chan->drm);
if (f) {
rcu_read_lock();
prev = rcu_dereference(f->channel);
if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0))
must_wait = false;
rcu_read_unlock();
}
if (must_wait)
ret = dma_fence_wait(fence, intr);
return ret;
}
return ret;
}


@ -35,13 +35,14 @@
* explicitly as max6659, or if its address is not 0x4c.
* These chips lack the remote temperature offset feature.
*
* This driver also supports the MAX6654 chip made by Maxim. This chip can
* be at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is
* otherwise similar to MAX6657/MAX6658/MAX6659. Extended range is available
* by setting the configuration register accordingly, and is done during
* initialization. Extended precision is only available at conversion rates
* of 1 Hz and slower. Note that extended precision is not enabled by
* default, as this driver initializes all chips to 2 Hz by design.
* This driver also supports the MAX6654 chip made by Maxim. This chip can be
* at 9 different addresses, similar to MAX6680/MAX6681. The MAX6654 is similar
* to MAX6657/MAX6658/MAX6659, but does not support critical temperature
* limits. Extended range is available by setting the configuration register
* accordingly, and is done during initialization. Extended precision is only
* available at conversion rates of 1 Hz and slower. Note that extended
* precision is not enabled by default, as this driver initializes all chips
* to 2 Hz by design.
*
* This driver also supports the MAX6646, MAX6647, MAX6648, MAX6649 and
* MAX6692 chips made by Maxim. These are again similar to the LM86,
@ -188,6 +189,8 @@ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
#define LM90_HAVE_BROKEN_ALERT (1 << 7) /* Broken alert */
#define LM90_HAVE_EXTENDED_TEMP (1 << 8) /* extended temperature support*/
#define LM90_PAUSE_FOR_CONFIG (1 << 9) /* Pause conversion for config */
#define LM90_HAVE_CRIT (1 << 10)/* Chip supports CRIT/OVERT register */
#define LM90_HAVE_CRIT_ALRM_SWP (1 << 11)/* critical alarm bits swapped */
/* LM90 status */
#define LM90_STATUS_LTHRM (1 << 0) /* local THERM limit tripped */
@ -197,6 +200,7 @@ enum chips { lm90, adm1032, lm99, lm86, max6657, max6659, adt7461, max6680,
#define LM90_STATUS_RHIGH (1 << 4) /* remote high temp limit tripped */
#define LM90_STATUS_LLOW (1 << 5) /* local low temp limit tripped */
#define LM90_STATUS_LHIGH (1 << 6) /* local high temp limit tripped */
#define LM90_STATUS_BUSY (1 << 7) /* conversion is ongoing */
#define MAX6696_STATUS2_R2THRM (1 << 1) /* remote2 THERM limit tripped */
#define MAX6696_STATUS2_R2OPEN (1 << 2) /* remote2 is an open circuit */
@ -354,38 +358,43 @@ struct lm90_params {
static const struct lm90_params lm90_params[] = {
[adm1032] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT,
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 10,
},
[adt7461] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP,
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP
| LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 10,
},
[g781] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT,
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 8,
},
[lm86] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_CRIT,
.alert_alarms = 0x7b,
.max_convrate = 9,
},
[lm90] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_CRIT,
.alert_alarms = 0x7b,
.max_convrate = 9,
},
[lm99] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_CRIT,
.alert_alarms = 0x7b,
.max_convrate = 9,
},
[max6646] = {
.flags = LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 6,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
@ -396,50 +405,51 @@ static const struct lm90_params lm90_params[] = {
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
},
[max6657] = {
.flags = LM90_PAUSE_FOR_CONFIG,
.flags = LM90_PAUSE_FOR_CONFIG | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 8,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
},
[max6659] = {
.flags = LM90_HAVE_EMERGENCY,
.flags = LM90_HAVE_EMERGENCY | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 8,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
},
[max6680] = {
.flags = LM90_HAVE_OFFSET,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_CRIT
| LM90_HAVE_CRIT_ALRM_SWP,
.alert_alarms = 0x7c,
.max_convrate = 7,
},
[max6696] = {
.flags = LM90_HAVE_EMERGENCY
| LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3,
| LM90_HAVE_EMERGENCY_ALARM | LM90_HAVE_TEMP3 | LM90_HAVE_CRIT,
.alert_alarms = 0x1c7c,
.max_convrate = 6,
.reg_local_ext = MAX6657_REG_R_LOCAL_TEMPL,
},
[w83l771] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 8,
},
[sa56004] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT | LM90_HAVE_CRIT,
.alert_alarms = 0x7b,
.max_convrate = 9,
.reg_local_ext = SA56004_REG_R_LOCAL_TEMPL,
},
[tmp451] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP,
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 9,
.reg_local_ext = TMP451_REG_R_LOCAL_TEMPL,
},
[tmp461] = {
.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP,
| LM90_HAVE_BROKEN_ALERT | LM90_HAVE_EXTENDED_TEMP | LM90_HAVE_CRIT,
.alert_alarms = 0x7c,
.max_convrate = 9,
.reg_local_ext = TMP451_REG_R_LOCAL_TEMPL,
@ -668,20 +678,22 @@ static int lm90_update_limits(struct device *dev)
struct i2c_client *client = data->client;
int val;
val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT);
if (val < 0)
return val;
data->temp8[LOCAL_CRIT] = val;
if (data->flags & LM90_HAVE_CRIT) {
val = lm90_read_reg(client, LM90_REG_R_LOCAL_CRIT);
if (val < 0)
return val;
data->temp8[LOCAL_CRIT] = val;
val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT);
if (val < 0)
return val;
data->temp8[REMOTE_CRIT] = val;
val = lm90_read_reg(client, LM90_REG_R_REMOTE_CRIT);
if (val < 0)
return val;
data->temp8[REMOTE_CRIT] = val;
val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST);
if (val < 0)
return val;
data->temp_hyst = val;
val = lm90_read_reg(client, LM90_REG_R_TCRIT_HYST);
if (val < 0)
return val;
data->temp_hyst = val;
}
val = lm90_read_reg(client, LM90_REG_R_REMOTE_LOWH);
if (val < 0)
@ -809,7 +821,7 @@ static int lm90_update_device(struct device *dev)
val = lm90_read_reg(client, LM90_REG_R_STATUS);
if (val < 0)
return val;
data->alarms = val; /* lower 8 bit of alarms */
data->alarms = val & ~LM90_STATUS_BUSY;
if (data->kind == max6696) {
val = lm90_select_remote_channel(data, 1);
@ -1160,8 +1172,8 @@ static int lm90_set_temphyst(struct lm90_data *data, long val)
else
temp = temp_from_s8(data->temp8[LOCAL_CRIT]);
/* prevent integer underflow */
val = max(val, -128000l);
/* prevent integer overflow/underflow */
val = clamp_val(val, -128000l, 255000l);
data->temp_hyst = hyst_to_reg(temp - val);
err = i2c_smbus_write_byte_data(client, LM90_REG_W_TCRIT_HYST,
@ -1192,6 +1204,7 @@ static const u8 lm90_temp_emerg_index[3] = {
static const u8 lm90_min_alarm_bits[3] = { 5, 3, 11 };
static const u8 lm90_max_alarm_bits[3] = { 6, 4, 12 };
static const u8 lm90_crit_alarm_bits[3] = { 0, 1, 9 };
static const u8 lm90_crit_alarm_bits_swapped[3] = { 1, 0, 9 };
static const u8 lm90_emergency_alarm_bits[3] = { 15, 13, 14 };
static const u8 lm90_fault_bits[3] = { 0, 2, 10 };
@ -1217,7 +1230,10 @@ static int lm90_temp_read(struct device *dev, u32 attr, int channel, long *val)
*val = (data->alarms >> lm90_max_alarm_bits[channel]) & 1;
break;
case hwmon_temp_crit_alarm:
*val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1;
if (data->flags & LM90_HAVE_CRIT_ALRM_SWP)
*val = (data->alarms >> lm90_crit_alarm_bits_swapped[channel]) & 1;
else
*val = (data->alarms >> lm90_crit_alarm_bits[channel]) & 1;
break;
case hwmon_temp_emergency_alarm:
*val = (data->alarms >> lm90_emergency_alarm_bits[channel]) & 1;
@@ -1465,12 +1481,11 @@ static int lm90_detect(struct i2c_client *client,
if (man_id < 0 || chip_id < 0 || config1 < 0 || convrate < 0)
return -ENODEV;
if (man_id == 0x01 || man_id == 0x5C || man_id == 0x41) {
if (man_id == 0x01 || man_id == 0x5C || man_id == 0xA1) {
config2 = i2c_smbus_read_byte_data(client, LM90_REG_R_CONFIG2);
if (config2 < 0)
return -ENODEV;
} else
config2 = 0; /* Make compiler happy */
}
if ((address == 0x4C || address == 0x4D)
&& man_id == 0x01) { /* National Semiconductor */
@@ -1903,11 +1918,14 @@ static int lm90_probe(struct i2c_client *client)
info->config = data->channel_config;
data->channel_config[0] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX |
HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM |
HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM;
HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM;
data->channel_config[1] = HWMON_T_INPUT | HWMON_T_MIN | HWMON_T_MAX |
HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_MIN_ALARM |
HWMON_T_MAX_ALARM | HWMON_T_CRIT_ALARM | HWMON_T_FAULT;
HWMON_T_MIN_ALARM | HWMON_T_MAX_ALARM | HWMON_T_FAULT;
if (data->flags & LM90_HAVE_CRIT) {
data->channel_config[0] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST;
data->channel_config[1] |= HWMON_T_CRIT | HWMON_T_CRIT_ALARM | HWMON_T_CRIT_HYST;
}
if (data->flags & LM90_HAVE_OFFSET)
data->channel_config[1] |= HWMON_T_OFFSET;


@@ -535,6 +535,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
sizeof(rdwr_arg)))
return -EFAULT;
if (!rdwr_arg.msgs || rdwr_arg.nmsgs == 0)
return -EINVAL;
if (rdwr_arg.nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
return -EINVAL;
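For context, the check above in isolation: user-supplied message counts must be validated before they size any copy or allocation. A minimal user-space sketch of the same validation order, assuming illustrative names (rdwr_args, validate_rdwr); 42 is the upstream value of I2C_RDWR_IOCTL_MAX_MSGS.

#include <errno.h>
#include <stddef.h>

#define I2C_RDWR_IOCTL_MAX_MSGS 42	/* upstream per-ioctl limit */

struct rdwr_args {
	void *msgs;		/* user-supplied pointer to the message array */
	unsigned int nmsgs;	/* user-supplied message count */
};

/* Reject NULL arrays, empty requests, and oversized counts up front,
 * before nmsgs is used to size anything. */
static int validate_rdwr(const struct rdwr_args *arg)
{
	if (!arg->msgs || arg->nmsgs == 0)
		return -EINVAL;
	if (arg->nmsgs > I2C_RDWR_IOCTL_MAX_MSGS)
		return -EINVAL;
	return 0;
}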


@@ -19,6 +19,7 @@
#include <linux/module.h>
#include <linux/input.h>
#include <linux/serio.h>
#include <asm/unaligned.h>
#define DRIVER_DESC "SpaceTec SpaceBall 2003/3003/4000 FLX driver"
@@ -75,9 +76,15 @@ static void spaceball_process_packet(struct spaceball* spaceball)
case 'D': /* Ball data */
if (spaceball->idx != 15) return;
for (i = 0; i < 6; i++)
/*
* Skip first three bytes; read six axes worth of data.
* Axis values are signed 16-bit big-endian.
*/
data += 3;
for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) {
input_report_abs(dev, spaceball_axes[i],
(__s16)((data[2 * i + 3] << 8) | data[2 * i + 2]));
(__s16)get_unaligned_be16(&data[i * 2]));
}
break;
case 'K': /* Button data */
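For reference, the decoding above in isolation: a small runnable sketch of reading signed big-endian 16-bit axis words from an offset packet, mirroring what get_unaligned_be16() does; the packet contents and helper names here are illustrative.

#include <stdint.h>
#include <stdio.h>

/* Portable equivalent of get_unaligned_be16(): assemble the value
 * byte by byte so pointer alignment never matters. */
static uint16_t get_be16(const uint8_t *p)
{
	return (uint16_t)((p[0] << 8) | p[1]);
}

/* Six signed big-endian axes start three bytes into the 'D' packet. */
static void decode_axes(const uint8_t *pkt, int16_t out[6])
{
	const uint8_t *data = pkt + 3;
	int i;

	for (i = 0; i < 6; i++)
		out[i] = (int16_t)get_be16(&data[i * 2]);
}

int main(void)
{
	uint8_t pkt[15] = { 'D', 0, 0, 0xff, 0x38 };	/* rest zero */
	int16_t axes[6];

	decode_axes(pkt, axes);
	printf("%d\n", axes[0]);	/* 0xff38 as s16 -> -200 */
	return 0;
}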


@@ -456,9 +456,10 @@ struct iqs626_private {
unsigned int suspend_mode;
};
static int iqs626_parse_events(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
static noinline_for_stack int
iqs626_parse_events(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
{
struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg;
struct i2c_client *client = iqs626->client;
@@ -604,9 +605,10 @@ static int iqs626_parse_events(struct iqs626_private *iqs626,
return 0;
}
static int iqs626_parse_ati_target(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
static noinline_for_stack int
iqs626_parse_ati_target(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
{
struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg;
struct i2c_client *client = iqs626->client;
@@ -885,9 +887,10 @@ static int iqs626_parse_trackpad(struct iqs626_private *iqs626,
return 0;
}
static int iqs626_parse_channel(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
static noinline_for_stack int
iqs626_parse_channel(struct iqs626_private *iqs626,
const struct fwnode_handle *ch_node,
enum iqs626_ch_id ch_id)
{
struct iqs626_sys_reg *sys_reg = &iqs626->sys_reg;
struct i2c_client *client = iqs626->client;
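The noinline_for_stack annotations above keep each parser's large locals on the stack only while that parser actually runs, instead of letting the compiler inline every parser (and sum their frames) into one caller. A user-space sketch of the effect, assuming a GCC/Clang-style noinline attribute and illustrative names:

#include <stdio.h>

#define noinline_for_stack __attribute__((noinline))

/* The 512-byte frame below exists only for the duration of the call;
 * if parse_one() were inlined into main(), several such frames could
 * be merged into the caller's frame at once. */
static noinline_for_stack int parse_one(int id)
{
	char scratch[512];

	snprintf(scratch, sizeof(scratch), "channel-%d", id);
	return scratch[8] == '0';
}

int main(void)
{
	int id, hits = 0;

	for (id = 0; id < 4; id++)
		hits += parse_one(id);
	printf("%d\n", hits);	/* 1 */
	return 0;
}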


@@ -916,6 +916,8 @@ static int atp_probe(struct usb_interface *iface,
set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);
set_bit(BTN_LEFT, input_dev->keybit);
INIT_WORK(&dev->work, atp_reinit);
error = input_register_device(dev->input);
if (error)
goto err_free_buffer;
@@ -923,8 +925,6 @@ static int atp_probe(struct usb_interface *iface,
/* save our data pointer in this interface device */
usb_set_intfdata(iface, dev);
INIT_WORK(&dev->work, atp_reinit);
return 0;
err_free_buffer:


@@ -1588,7 +1588,13 @@ static const struct dmi_system_id no_hw_res_dmi_table[] = {
*/
static int elantech_change_report_id(struct psmouse *psmouse)
{
unsigned char param[2] = { 0x10, 0x03 };
/*
* NOTE: the code expects to receive param[] as an array of 3
* items (see __ps2_command()), even though only 2 are actually
* needed here. Keep the array size at 3 to avoid potential
* stack out-of-bounds accesses.
*/
unsigned char param[3] = { 0x10, 0x03 };
if (elantech_write_reg_params(psmouse, 0x7, param) ||
elantech_read_reg_params(psmouse, 0x7, param) ||
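A toy model of the overflow the comment above describes, with illustrative names: the callee always writes three bytes, so the caller's buffer must hold three even when only two carry meaning.

#include <stdio.h>

/* Stand-in for __ps2_command(): unconditionally writes param[0..2]. */
static void ps2_command_like(unsigned char *param)
{
	param[0] = 0x10;
	param[1] = 0x03;
	param[2] = 0x00;	/* the byte that overflowed a 2-byte array */
}

int main(void)
{
	unsigned char param[3] = { 0x10, 0x03 };	/* 3 slots, not 2 */

	ps2_command_like(param);
	printf("%02x %02x\n", param[0], param[1]);
	return 0;
}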


@@ -995,6 +995,24 @@ static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = {
{ }
};
static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = {
{
/* ASUS ZenBook UX425UA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),
},
},
{
/* ASUS ZenBook UM325UA */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
},
},
{ }
};
#endif /* CONFIG_X86 */
#ifdef CONFIG_PNP
@@ -1315,6 +1333,9 @@ static int __init i8042_platform_init(void)
if (dmi_check_system(i8042_dmi_kbdreset_table))
i8042_kbdreset = true;
if (dmi_check_system(i8042_dmi_probe_defer_table))
i8042_probe_defer = true;
/*
* A20 was already enabled during early kernel init. But some buggy
* BIOSes (in MSI Laptops) require A20 to be enabled using 8042 to


@@ -45,6 +45,10 @@ static bool i8042_unlock;
module_param_named(unlock, i8042_unlock, bool, 0);
MODULE_PARM_DESC(unlock, "Ignore keyboard lock.");
static bool i8042_probe_defer;
module_param_named(probe_defer, i8042_probe_defer, bool, 0);
MODULE_PARM_DESC(probe_defer, "Allow deferred probing.");
enum i8042_controller_reset_mode {
I8042_RESET_NEVER,
I8042_RESET_ALWAYS,
@@ -711,7 +715,7 @@ static int i8042_set_mux_mode(bool multiplex, unsigned char *mux_version)
* LCS/Telegraphics.
*/
static int __init i8042_check_mux(void)
static int i8042_check_mux(void)
{
unsigned char mux_version;
@@ -740,10 +744,10 @@ static int __init i8042_check_mux(void)
/*
* The following is used to test AUX IRQ delivery.
*/
static struct completion i8042_aux_irq_delivered __initdata;
static bool i8042_irq_being_tested __initdata;
static struct completion i8042_aux_irq_delivered;
static bool i8042_irq_being_tested;
static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)
static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id)
{
unsigned long flags;
unsigned char str, data;
@@ -770,7 +774,7 @@ static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)
* verifies success by reading CTR. Used when testing for presence of AUX
* port.
*/
static int __init i8042_toggle_aux(bool on)
static int i8042_toggle_aux(bool on)
{
unsigned char param;
int i;
@@ -798,7 +802,7 @@ static int __init i8042_toggle_aux(bool on)
* the presence of an AUX interface.
*/
static int __init i8042_check_aux(void)
static int i8042_check_aux(void)
{
int retval = -1;
bool irq_registered = false;
@@ -1005,7 +1009,7 @@ static int i8042_controller_init(void)
if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) {
pr_err("Can't read CTR while initializing i8042\n");
return -EIO;
return i8042_probe_defer ? -EPROBE_DEFER : -EIO;
}
} while (n < 2 || ctr[0] != ctr[1]);
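The hunk above turns a hard controller failure into a retryable one when the new flag is set. A compilable sketch of that mapping, noting that 517 is the kernel-internal value of EPROBE_DEFER (it is never seen by user space):

#include <errno.h>
#include <stdbool.h>

#define EPROBE_DEFER 517	/* kernel-internal: retry the probe later */

static bool probe_defer_requested;	/* mirrors the module parameter */

/* On a CTR read failure, either fail hard or ask the driver core to
 * try probing again later. */
static int ctr_read_failed(void)
{
	return probe_defer_requested ? -EPROBE_DEFER : -EIO;
}

int main(void)
{
	probe_defer_requested = true;
	return ctr_read_failed() == -EPROBE_DEFER ? 0 : 1;
}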
@@ -1320,7 +1324,7 @@ static void i8042_shutdown(struct platform_device *dev)
i8042_controller_reset(false);
}
static int __init i8042_create_kbd_port(void)
static int i8042_create_kbd_port(void)
{
struct serio *serio;
struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO];
@@ -1349,7 +1353,7 @@ static int __init i8042_create_kbd_port(void)
return 0;
}
static int __init i8042_create_aux_port(int idx)
static int i8042_create_aux_port(int idx)
{
struct serio *serio;
int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx;
@@ -1386,13 +1390,13 @@ static int __init i8042_create_aux_port(int idx)
return 0;
}
static void __init i8042_free_kbd_port(void)
static void i8042_free_kbd_port(void)
{
kfree(i8042_ports[I8042_KBD_PORT_NO].serio);
i8042_ports[I8042_KBD_PORT_NO].serio = NULL;
}
static void __init i8042_free_aux_ports(void)
static void i8042_free_aux_ports(void)
{
int i;
@@ -1402,7 +1406,7 @@ static void __init i8042_free_aux_ports(void)
}
}
static void __init i8042_register_ports(void)
static void i8042_register_ports(void)
{
int i;
@@ -1443,7 +1447,7 @@ static void i8042_free_irqs(void)
i8042_aux_irq_registered = i8042_kbd_irq_registered = false;
}
static int __init i8042_setup_aux(void)
static int i8042_setup_aux(void)
{
int (*aux_enable)(void);
int error;
@@ -1485,7 +1489,7 @@ static int __init i8042_setup_aux(void)
return error;
}
static int __init i8042_setup_kbd(void)
static int i8042_setup_kbd(void)
{
int error;
@@ -1535,7 +1539,7 @@ static int i8042_kbd_bind_notifier(struct notifier_block *nb,
return 0;
}
static int __init i8042_probe(struct platform_device *dev)
static int i8042_probe(struct platform_device *dev)
{
int error;
@@ -1600,6 +1604,7 @@ static struct platform_driver i8042_driver = {
.pm = &i8042_pm_ops,
#endif
},
.probe = i8042_probe,
.remove = i8042_remove,
.shutdown = i8042_shutdown,
};
@@ -1610,7 +1615,6 @@ static struct notifier_block i8042_kbd_bind_notifier_block = {
static int __init i8042_init(void)
{
struct platform_device *pdev;
int err;
dbg_init();
@@ -1626,17 +1630,29 @@ static int __init i8042_init(void)
/* Set this before creating the dev to allow i8042_command to work right away */
i8042_present = true;
pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0);
if (IS_ERR(pdev)) {
err = PTR_ERR(pdev);
err = platform_driver_register(&i8042_driver);
if (err)
goto err_platform_exit;
i8042_platform_device = platform_device_alloc("i8042", -1);
if (!i8042_platform_device) {
err = -ENOMEM;
goto err_unregister_driver;
}
err = platform_device_add(i8042_platform_device);
if (err)
goto err_free_device;
bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block);
panic_blink = i8042_panic_blink;
return 0;
err_free_device:
platform_device_put(i8042_platform_device);
err_unregister_driver:
platform_driver_unregister(&i8042_driver);
err_platform_exit:
i8042_platform_exit();
return err;
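The rewritten i8042_init() above replaces platform_create_bundle() with explicit register/alloc/add steps, each paired with an unwind label executed in reverse order on failure. A runnable user-space sketch of that ladder, with illustrative names:

#include <stdlib.h>

static int register_driver(void)	{ return 0; }
static void unregister_driver(void)	{ }
static void *alloc_device(void)		{ return malloc(1); }
static int add_device(void *dev)	{ (void)dev; return 0; }

static int init(void)
{
	void *dev;
	int err;

	err = register_driver();
	if (err)
		goto err_platform_exit;

	dev = alloc_device();
	if (!dev) {
		err = -1;		/* -ENOMEM analogue */
		goto err_unregister_driver;
	}

	err = add_device(dev);
	if (err)
		goto err_free_device;

	return 0;

err_free_device:
	free(dev);			/* platform_device_put() analogue */
err_unregister_driver:
	unregister_driver();
err_platform_exit:
	return err;
}

int main(void)
{
	return init();
}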


@@ -1882,7 +1882,7 @@ static int mxt_read_info_block(struct mxt_data *data)
if (error) {
dev_err(&client->dev, "Error %d parsing object table\n", error);
mxt_free_object_table(data);
goto err_free_mem;
return error;
}
data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START);


@@ -117,6 +117,19 @@
#define ELAN_POWERON_DELAY_USEC 500
#define ELAN_RESET_DELAY_MSEC 20
/* FW boot code version */
#define BC_VER_H_BYTE_FOR_EKTH3900x1_I2C 0x72
#define BC_VER_H_BYTE_FOR_EKTH3900x2_I2C 0x82
#define BC_VER_H_BYTE_FOR_EKTH3900x3_I2C 0x92
#define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C 0x6D
#define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C 0x6E
#define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C 0x77
#define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C 0x78
#define BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB 0x67
#define BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB 0x68
#define BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB 0x74
#define BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB 0x75
enum elants_chip_id {
EKTH3500,
EKTF3624,
@@ -736,6 +749,37 @@ static int elants_i2c_validate_remark_id(struct elants_data *ts,
return 0;
}
static bool elants_i2c_should_check_remark_id(struct elants_data *ts)
{
struct i2c_client *client = ts->client;
const u8 bootcode_version = ts->iap_version;
bool check;
/* I2C eKTH3900 and eKTH5312 do NOT support Remark ID */
if ((bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x1_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x2_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH3900x3_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x1_I2C_USB) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312x2_I2C_USB) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx1_I2C_USB) ||
(bootcode_version == BC_VER_H_BYTE_FOR_EKTH5312cx2_I2C_USB)) {
dev_dbg(&client->dev,
"eKTH3900/eKTH5312(0x%02x) are not support remark id\n",
bootcode_version);
check = false;
} else if (bootcode_version >= 0x60) {
check = true;
} else {
check = false;
}
return check;
}
static int elants_i2c_do_update_firmware(struct i2c_client *client,
const struct firmware *fw,
bool force)
@@ -749,7 +793,7 @@ static int elants_i2c_do_update_firmware(struct i2c_client *client,
u16 send_id;
int page, n_fw_pages;
int error;
bool check_remark_id = ts->iap_version >= 0x60;
bool check_remark_id = elants_i2c_should_check_remark_id(ts);
/* Recovery mode detection! */
if (force) {


@@ -102,6 +102,7 @@ static const struct goodix_chip_id goodix_chip_ids[] = {
{ .id = "911", .data = &gt911_chip_data },
{ .id = "9271", .data = &gt911_chip_data },
{ .id = "9110", .data = &gt911_chip_data },
{ .id = "9111", .data = &gt911_chip_data },
{ .id = "927", .data = &gt911_chip_data },
{ .id = "928", .data = &gt911_chip_data },
@@ -650,10 +651,16 @@ int goodix_reset_no_int_sync(struct goodix_ts_data *ts)
usleep_range(6000, 10000); /* T4: > 5ms */
/* end select I2C slave addr */
error = gpiod_direction_input(ts->gpiod_rst);
if (error)
goto error;
/*
* Put the reset pin back into input / high-impedance mode to save
* power. Only do this in the non-ACPI case, since some ACPI boards
* don't have a pull-up, so there the reset pin must stay active-high.
*/
if (ts->irq_pin_access_method == IRQ_PIN_ACCESS_GPIO) {
error = gpiod_direction_input(ts->gpiod_rst);
if (error)
goto error;
}
return 0;
@@ -787,6 +794,14 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
return -EINVAL;
}
/*
* Normally we put the reset pin in input / high-impedance mode to save
* power. But some x86/ACPI boards don't have a pull-up, so for the ACPI
* case, leave the pin as is. This results in the pin not being touched
* at all on x86/ACPI boards, except when needed for error recovery.
*/
ts->gpiod_rst_flags = GPIOD_ASIS;
return devm_acpi_dev_add_driver_gpios(dev, gpio_mapping);
}
#else
@@ -812,6 +827,12 @@ static int goodix_get_gpio_config(struct goodix_ts_data *ts)
return -EINVAL;
dev = &ts->client->dev;
/*
* By default we request the reset pin as input, leaving it in
* high-impedance when not resetting the controller to save power.
*/
ts->gpiod_rst_flags = GPIOD_IN;
ts->avdd28 = devm_regulator_get(dev, "AVDD28");
if (IS_ERR(ts->avdd28)) {
error = PTR_ERR(ts->avdd28);
@@ -849,7 +870,7 @@ static int goodix_get_gpio_config(struct goodix_ts_data *ts)
ts->gpiod_int = gpiod;
/* Get the reset line GPIO pin number */
gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, GPIOD_IN);
gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, ts->gpiod_rst_flags);
if (IS_ERR(gpiod)) {
error = PTR_ERR(gpiod);
if (error != -EPROBE_DEFER)


@@ -87,6 +87,7 @@ struct goodix_ts_data {
struct gpio_desc *gpiod_rst;
int gpio_count;
int gpio_int_idx;
enum gpiod_flags gpiod_rst_flags;
char id[GOODIX_ID_MAX_LEN + 1];
char cfg_name[64];
u16 version;


@@ -207,7 +207,7 @@ static int goodix_firmware_upload(struct goodix_ts_data *ts)
error = goodix_reset_no_int_sync(ts);
if (error)
return error;
goto release;
error = goodix_enter_upload_mode(ts->client);
if (error)


@@ -381,7 +381,7 @@ mISDNInit(void)
err = mISDN_inittimer(&debug);
if (err)
goto error2;
err = l1_init(&debug);
err = Isdnl1_Init(&debug);
if (err)
goto error3;
err = Isdnl2_Init(&debug);
@@ -395,7 +395,7 @@ mISDNInit(void)
error5:
Isdnl2_cleanup();
error4:
l1_cleanup();
Isdnl1_cleanup();
error3:
mISDN_timer_cleanup();
error2:
@@ -408,7 +408,7 @@ static void mISDN_cleanup(void)
{
misdn_sock_cleanup();
Isdnl2_cleanup();
l1_cleanup();
Isdnl1_cleanup();
mISDN_timer_cleanup();
class_unregister(&mISDN_class);


@@ -60,8 +60,8 @@ struct Bprotocol *get_Bprotocol4id(u_int);
extern int mISDN_inittimer(u_int *);
extern void mISDN_timer_cleanup(void);
extern int l1_init(u_int *);
extern void l1_cleanup(void);
extern int Isdnl1_Init(u_int *);
extern void Isdnl1_cleanup(void);
extern int Isdnl2_Init(u_int *);
extern void Isdnl2_cleanup(void);


@@ -398,7 +398,7 @@ create_l1(struct dchannel *dch, dchannel_l1callback *dcb) {
EXPORT_SYMBOL(create_l1);
int
l1_init(u_int *deb)
Isdnl1_Init(u_int *deb)
{
debug = deb;
l1fsm_s.state_count = L1S_STATE_COUNT;
@@ -409,7 +409,7 @@ l1_init(u_int *deb)
}
void
l1_cleanup(void)
Isdnl1_cleanup(void)
{
mISDN_FsmFree(&l1fsm_s);
}


@@ -2264,7 +2264,7 @@ void mmc_start_host(struct mmc_host *host)
_mmc_detect_change(host, 0, false);
}
void mmc_stop_host(struct mmc_host *host)
void __mmc_stop_host(struct mmc_host *host)
{
if (host->slot.cd_irq >= 0) {
mmc_gpio_set_cd_wake(host, false);
@@ -2273,6 +2273,11 @@ void mmc_stop_host(struct mmc_host *host)
host->rescan_disable = 1;
cancel_delayed_work_sync(&host->detect);
}
void mmc_stop_host(struct mmc_host *host)
{
__mmc_stop_host(host);
/* clear pm flags now and let card drivers set them as needed */
host->pm_flags = 0;


@@ -70,6 +70,7 @@ static inline void mmc_delay(unsigned int ms)
void mmc_rescan(struct work_struct *work);
void mmc_start_host(struct mmc_host *host);
void __mmc_stop_host(struct mmc_host *host);
void mmc_stop_host(struct mmc_host *host);
void _mmc_detect_change(struct mmc_host *host, unsigned long delay,


@@ -80,9 +80,18 @@ static void mmc_host_classdev_release(struct device *dev)
kfree(host);
}
static int mmc_host_classdev_shutdown(struct device *dev)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
__mmc_stop_host(host);
return 0;
}
static struct class mmc_host_class = {
.name = "mmc_host",
.dev_release = mmc_host_classdev_release,
.shutdown_pre = mmc_host_classdev_shutdown,
.pm = MMC_HOST_CLASS_DEV_PM_OPS,
};
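The new shutdown_pre hook above lets the class quiesce every host device before the system goes down, without each host driver implementing its own shutdown. A sketch of the idea as a plain function-pointer table; all names here are illustrative:

#include <stdio.h>

struct class_ops {
	void (*release)(void *dev);
	int (*shutdown_pre)(void *dev);	/* runs for every class device */
};

static int host_shutdown(void *dev)
{
	(void)dev;
	printf("stop rescan, quiesce host\n");	/* __mmc_stop_host() analogue */
	return 0;
}

static const struct class_ops host_class = {
	.shutdown_pre = host_shutdown,
};

int main(void)
{
	if (host_class.shutdown_pre)
		host_class.shutdown_pre(NULL);
	return 0;
}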


@@ -135,6 +135,7 @@ static void meson_mx_sdhc_start_cmd(struct mmc_host *mmc,
struct mmc_command *cmd)
{
struct meson_mx_sdhc_host *host = mmc_priv(mmc);
bool manual_stop = false;
u32 ictl, send;
int pack_len;
@@ -172,12 +173,27 @@ static void meson_mx_sdhc_start_cmd(struct mmc_host *mmc,
else
/* software flush: */
ictl |= MESON_SDHC_ICTL_DATA_XFER_OK;
/*
* Mimic the logic from the vendor driver where (only)
* SD_IO_RW_EXTENDED commands with more than one block set the
* MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware
* download in the brcmfmac driver for a BCM43362/1 card.
* Without this, sdio_memcpy_toio() (with a size of 219557
* bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set.
*/
manual_stop = cmd->data->blocks > 1 &&
cmd->opcode == SD_IO_RW_EXTENDED;
} else {
pack_len = 0;
ictl |= MESON_SDHC_ICTL_RESP_OK;
}
regmap_update_bits(host->regmap, MESON_SDHC_MISC,
MESON_SDHC_MISC_MANUAL_STOP,
manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0);
if (cmd->opcode == MMC_STOP_TRANSMISSION)
send |= MESON_SDHC_SEND_DATA_STOP;


@@ -441,6 +441,8 @@ static int sdmmc_dlyb_phase_tuning(struct mmci_host *host, u32 opcode)
return -EINVAL;
}
writel_relaxed(0, dlyb->base + DLYB_CR);
phase = end_of_len - max_len / 2;
sdmmc_dlyb_set_cfgr(dlyb, dlyb->unit, phase, false);
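The restored line above recomputes the delay phase as the midpoint of the longest run of passing taps. A worked example of that arithmetic: if taps 4..11 pass, the run has length 8 and ends at index 12, so the chosen phase is 12 - 8/2 = 8, squarely inside the window.

#include <stdio.h>

int main(void)
{
	int end_of_len = 12;	/* one past the last passing tap */
	int max_len = 8;	/* length of the longest passing run */
	int phase = end_of_len - max_len / 2;

	printf("phase = %d\n", phase);	/* 8 */
	return 0;
}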


@@ -356,23 +356,6 @@ static void tegra_sdhci_set_tap(struct sdhci_host *host, unsigned int tap)
}
}
static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
struct mmc_ios *ios)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 val;
val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
if (ios->enhanced_strobe)
val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
else
val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
}
static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
@@ -793,6 +776,32 @@ static void tegra_sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
}
}
static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
struct mmc_ios *ios)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 val;
val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
if (ios->enhanced_strobe) {
val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
/*
* When CMD13 is sent from mmc_select_hs400es() after
* switching to HS400ES mode, the bus is operating at
* either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR.
* To meet the Tegra SDHCI requirement in HS400ES mode, force the
* SDHCI interface clock to MMC_HS200_MAX_DTR (200 MHz) so that the
* host controller CAR clock and the interface clock stay rate-matched.
*/
tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR);
} else {
val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
}
sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
}
static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);


@@ -1526,7 +1526,7 @@ static int bond_option_ad_actor_system_set(struct bonding *bond,
mac = (u8 *)&newval->value;
}
if (!is_valid_ether_addr(mac))
if (is_multicast_ether_addr(mac))
goto err;
netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
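The relaxed check above rejects only multicast MACs: a multicast Ethernet address has the least-significant bit of its first octet set, and the all-zeroes address does not, so it now passes and bonding can substitute the bond's own MAC for it. A runnable sketch of the predicate:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool is_multicast_ether(const uint8_t a[6])
{
	return a[0] & 0x01;	/* I/G bit of the first octet */
}

int main(void)
{
	const uint8_t zeroes[6] = { 0 };
	const uint8_t mcast[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };

	printf("all-zeroes: %s\n", is_multicast_ether(zeroes) ? "reject" : "accept");
	printf("multicast:  %s\n", is_multicast_ether(mcast) ? "reject" : "accept");
	return 0;
}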


@@ -366,6 +366,10 @@ int aq_ring_rx_clean(struct aq_ring_s *self,
if (!buff->is_eop) {
buff_ = buff;
do {
if (buff_->next >= self->size) {
err = -EIO;
goto err_exit;
}
next_ = buff_->next,
buff_ = &self->buff_ring[next_];
is_rsc_completed =
@@ -389,6 +393,10 @@
(buff->is_lro && buff->is_cso_err)) {
buff_ = buff;
do {
if (buff_->next >= self->size) {
err = -EIO;
goto err_exit;
}
next_ = buff_->next,
buff_ = &self->buff_ring[next_];
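Both hunks above add the same hardening: an index chained through hardware-written descriptors is range-checked before it addresses the software ring, so a corrupt descriptor cannot walk the chain out of bounds. A minimal sketch with an illustrative ring size:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 64	/* illustrative; the driver uses self->size */

struct buf {
	uint16_t next;	/* next index in the descriptor chain */
};

static int follow(const struct buf *ring, uint16_t idx, uint16_t *out)
{
	uint16_t next = ring[idx].next;

	if (next >= RING_SIZE)
		return -EIO;	/* corrupt chain, bail out */
	*out = next;
	return 0;
}

int main(void)
{
	struct buf ring[RING_SIZE] = { [0] = { .next = 1000 } };
	uint16_t next;

	printf("%d\n", follow(ring, 0, &next));	/* -EIO: 1000 >= 64 */
	return 0;
}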


@@ -1913,15 +1913,12 @@ static int ag71xx_probe(struct platform_device *pdev)
ag->mac_reset = devm_reset_control_get(&pdev->dev, "mac");
if (IS_ERR(ag->mac_reset)) {
netif_err(ag, probe, ndev, "missing mac reset\n");
err = PTR_ERR(ag->mac_reset);
goto err_free;
return PTR_ERR(ag->mac_reset);
}
ag->mac_base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
if (!ag->mac_base) {
err = -ENOMEM;
goto err_free;
}
if (!ag->mac_base)
return -ENOMEM;
ndev->irq = platform_get_irq(pdev, 0);
err = devm_request_irq(&pdev->dev, ndev->irq, ag71xx_interrupt,
@@ -1929,7 +1926,7 @@ static int ag71xx_probe(struct platform_device *pdev)
if (err) {
netif_err(ag, probe, ndev, "unable to request IRQ %d\n",
ndev->irq);
goto err_free;
return err;
}
ndev->netdev_ops = &ag71xx_netdev_ops;
@@ -1957,10 +1954,8 @@ static int ag71xx_probe(struct platform_device *pdev)
ag->stop_desc = dmam_alloc_coherent(&pdev->dev,
sizeof(struct ag71xx_desc),
&ag->stop_desc_dma, GFP_KERNEL);
if (!ag->stop_desc) {
err = -ENOMEM;
goto err_free;
}
if (!ag->stop_desc)
return -ENOMEM;
ag->stop_desc->data = 0;
ag->stop_desc->ctrl = 0;
@@ -1975,7 +1970,7 @@ static int ag71xx_probe(struct platform_device *pdev)
err = of_get_phy_mode(np, &ag->phy_if_mode);
if (err) {
netif_err(ag, probe, ndev, "missing phy-mode property in DT\n");
goto err_free;
return err;
}
netif_napi_add(ndev, &ag->napi, ag71xx_poll, AG71XX_NAPI_WEIGHT);
@@ -1983,7 +1978,7 @@ static int ag71xx_probe(struct platform_device *pdev)
err = clk_prepare_enable(ag->clk_eth);
if (err) {
netif_err(ag, probe, ndev, "Failed to enable eth clk.\n");
goto err_free;
return err;
}
ag71xx_wr(ag, AG71XX_REG_MAC_CFG1, 0);
@@ -2019,8 +2014,6 @@ static int ag71xx_probe(struct platform_device *pdev)
ag71xx_mdio_remove(ag);
err_put_clk:
clk_disable_unprepare(ag->clk_eth);
err_free:
free_netdev(ndev);
return err;
}


@@ -1805,7 +1805,7 @@ static int fman_port_probe(struct platform_device *of_dev)
fman = dev_get_drvdata(&fm_pdev->dev);
if (!fman) {
err = -EINVAL;
goto return_err;
goto put_device;
}
err = of_property_read_u32(port_node, "cell-index", &val);
@@ -1813,7 +1813,7 @@ static int fman_port_probe(struct platform_device *of_dev)
dev_err(port->dev, "%s: reading cell-index for %pOF failed\n",
__func__, port_node);
err = -EINVAL;
goto return_err;
goto put_device;
}
port_id = (u8)val;
port->dts_params.id = port_id;
@@ -1847,7 +1847,7 @@
} else {
dev_err(port->dev, "%s: Illegal port type\n", __func__);
err = -EINVAL;
goto return_err;
goto put_device;
}
port->dts_params.type = port_type;
@@ -1861,7 +1861,7 @@
dev_err(port->dev, "%s: incorrect qman-channel-id\n",
__func__);
err = -EINVAL;
goto return_err;
goto put_device;
}
port->dts_params.qman_channel_id = qman_channel_id;
}
@@ -1871,7 +1871,7 @@
dev_err(port->dev, "%s: of_address_to_resource() failed\n",
__func__);
err = -ENOMEM;
goto return_err;
goto put_device;
}
port->dts_params.fman = fman;
@@ -1896,6 +1896,8 @@
return 0;
put_device:
put_device(&fm_pdev->dev);
return_err:
of_node_put(port_node);
free_port:


@@ -738,10 +738,7 @@ int gve_adminq_describe_device(struct gve_priv *priv)
* is not set to GqiRda, choose the queue format in a priority order:
* DqoRda, GqiRda, GqiQpl. Use GqiQpl as default.
*/
if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
dev_info(&priv->pdev->dev,
"Driver is running with GQI RDA queue format.\n");
} else if (dev_op_dqo_rda) {
if (dev_op_dqo_rda) {
priv->queue_format = GVE_DQO_RDA_FORMAT;
dev_info(&priv->pdev->dev,
"Driver is running with DQO RDA queue format.\n");
@@ -753,6 +750,9 @@ int gve_adminq_describe_device(struct gve_priv *priv)
"Driver is running with GQI RDA queue format.\n");
supported_features_mask =
be32_to_cpu(dev_op_gqi_rda->supported_features_mask);
} else if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
dev_info(&priv->pdev->dev,
"Driver is running with GQI RDA queue format.\n");
} else {
priv->queue_format = GVE_GQI_QPL_FORMAT;
if (dev_op_gqi_qpl)


@@ -6,6 +6,18 @@
#include "ice_lib.h"
#include "ice_dcb_lib.h"
static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring)
{
rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL);
return !!rx_ring->xdp_buf;
}
static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring)
{
rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
return !!rx_ring->rx_buf;
}
/**
* __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI
* @qs_cfg: gathered variables needed for PF->VSI queues assignment
@@ -492,8 +504,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
ring->q_index, ring->q_vector->napi.napi_id);
kfree(ring->rx_buf);
ring->xsk_pool = ice_xsk_pool(ring);
if (ring->xsk_pool) {
if (!ice_alloc_rx_buf_zc(ring))
return -ENOMEM;
xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);
ring->rx_buf_len =
@@ -508,6 +523,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring)
dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
ring->q_index);
} else {
if (!ice_alloc_rx_buf(ring))
return -ENOMEM;
if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
/* coverity[check_return] */
xdp_rxq_info_reg(&ring->xdp_rxq,


@@ -419,7 +419,10 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring)
}
rx_skip_free:
memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count);
if (rx_ring->xsk_pool)
memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf)));
else
memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf)));
/* Zero out the descriptor ring */
size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
@@ -446,8 +449,13 @@ void ice_free_rx_ring(struct ice_rx_ring *rx_ring)
if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
rx_ring->xdp_prog = NULL;
devm_kfree(rx_ring->dev, rx_ring->rx_buf);
rx_ring->rx_buf = NULL;
if (rx_ring->xsk_pool) {
kfree(rx_ring->xdp_buf);
rx_ring->xdp_buf = NULL;
} else {
kfree(rx_ring->rx_buf);
rx_ring->rx_buf = NULL;
}
if (rx_ring->desc) {
size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
@@ -475,8 +483,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
/* warn if we are about to overwrite the pointer */
WARN_ON(rx_ring->rx_buf);
rx_ring->rx_buf =
devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count,
GFP_KERNEL);
kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
if (!rx_ring->rx_buf)
return -ENOMEM;
@@ -505,7 +512,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
return 0;
err:
devm_kfree(dev, rx_ring->rx_buf);
kfree(rx_ring->rx_buf);
rx_ring->rx_buf = NULL;
return -ENOMEM;
}


@@ -24,7 +24,6 @@
#define ICE_MAX_DATA_PER_TXD_ALIGNED \
(~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD)
#define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */
#define ICE_MAX_TXQ_PER_TXQG 128
/* Attempt to maximize the headroom available for incoming frames. We use a 2K


@@ -12,6 +12,11 @@
#include "ice_txrx_lib.h"
#include "ice_lib.h"
static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx)
{
return &rx_ring->xdp_buf[idx];
}
/**
* ice_qp_reset_stats - Resets all stats for rings of given index
* @vsi: VSI that contains rings of interest
@@ -372,7 +377,7 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
dma_addr_t dma;
rx_desc = ICE_RX_DESC(rx_ring, ntu);
xdp = &rx_ring->xdp_buf[ntu];
xdp = ice_xdp_buf(rx_ring, ntu);
nb_buffs = min_t(u16, count, rx_ring->count - ntu);
nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
@@ -390,14 +395,9 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
}
ntu += nb_buffs;
if (ntu == rx_ring->count) {
rx_desc = ICE_RX_DESC(rx_ring, 0);
xdp = rx_ring->xdp_buf;
if (ntu == rx_ring->count)
ntu = 0;
}
/* clear the status bits for the next_to_use descriptor */
rx_desc->wb.status_error0 = 0;
ice_release_rx_desc(rx_ring, ntu);
return count == nb_buffs;
@@ -419,19 +419,18 @@ static void ice_bump_ntc(struct ice_rx_ring *rx_ring)
/**
* ice_construct_skb_zc - Create an sk_buff from zero-copy buffer
* @rx_ring: Rx ring
* @xdp_arr: Pointer to the SW ring of xdp_buff pointers
* @xdp: Pointer to XDP buffer
*
* This function allocates a new skb from a zero-copy Rx buffer.
*
* Returns the skb on success, NULL on failure.
*/
static struct sk_buff *
ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
{
struct xdp_buff *xdp = *xdp_arr;
unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
unsigned int metasize = xdp->data - xdp->data_meta;
unsigned int datasize = xdp->data_end - xdp->data;
unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
struct sk_buff *skb;
skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
@@ -445,7 +444,6 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
skb_metadata_set(skb, metasize);
xsk_buff_free(xdp);
*xdp_arr = NULL;
return skb;
}
@@ -507,7 +505,6 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
{
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
struct ice_tx_ring *xdp_ring;
unsigned int xdp_xmit = 0;
struct bpf_prog *xdp_prog;
@@ -522,7 +519,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
while (likely(total_rx_packets < (unsigned int)budget)) {
union ice_32b_rx_flex_desc *rx_desc;
unsigned int size, xdp_res = 0;
struct xdp_buff **xdp;
struct xdp_buff *xdp;
struct sk_buff *skb;
u16 stat_err_bits;
u16 vlan_tag = 0;
@@ -540,31 +537,35 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
*/
dma_rmb();
xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
size = le16_to_cpu(rx_desc->wb.pkt_len) &
ICE_RX_FLX_DESC_PKT_LEN_M;
if (!size)
break;
if (!size) {
xdp->data = NULL;
xdp->data_end = NULL;
xdp->data_hard_start = NULL;
xdp->data_meta = NULL;
goto construct_skb;
}
xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];
xsk_buff_set_size(*xdp, size);
xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);
xsk_buff_set_size(xdp, size);
xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);
xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring);
xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);
if (xdp_res) {
if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
xdp_xmit |= xdp_res;
else
xsk_buff_free(*xdp);
xsk_buff_free(xdp);
*xdp = NULL;
total_rx_bytes += size;
total_rx_packets++;
cleaned_count++;
ice_bump_ntc(rx_ring);
continue;
}
construct_skb:
/* XDP_PASS path */
skb = ice_construct_skb_zc(rx_ring, xdp);
if (!skb) {
@@ -572,7 +573,6 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
break;
}
cleaned_count++;
ice_bump_ntc(rx_ring);
if (eth_skb_pad(skb)) {
@@ -594,8 +594,7 @@ int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
ice_receive_skb(rx_ring, skb, vlan_tag);
}
if (cleaned_count >= ICE_RX_BUF_WRITE)
failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count);
failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));
ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
@@ -811,15 +810,14 @@ bool ice_xsk_any_rx_ring_ena(struct ice_vsi *vsi)
*/
void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
{
u16 i;
u16 count_mask = rx_ring->count - 1;
u16 ntc = rx_ring->next_to_clean;
u16 ntu = rx_ring->next_to_use;
for (i = 0; i < rx_ring->count; i++) {
struct xdp_buff **xdp = &rx_ring->xdp_buf[i];
for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);
if (!xdp)
continue;
*xdp = NULL;
xsk_buff_free(xdp);
}
}
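The rewritten cleanup above walks only the slots between next-to-clean and next-to-use, wrapping with a power-of-two mask instead of scanning the whole ring. A runnable sketch of that traversal:

#include <stdio.h>

int main(void)
{
	unsigned int count = 8;			/* ring size, power of two */
	unsigned int count_mask = count - 1;
	unsigned int ntc = 6, ntu = 2;		/* region wraps past the end */

	for (; ntc != ntu; ntc = (ntc + 1) & count_mask)
		printf("free slot %u\n", ntc);	/* 6, 7, 0, 1 */
	return 0;
}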


@@ -9254,7 +9254,7 @@ static int __maybe_unused igb_suspend(struct device *dev)
return __igb_shutdown(to_pci_dev(dev), NULL, 0);
}
static int __maybe_unused igb_resume(struct device *dev)
static int __maybe_unused __igb_resume(struct device *dev, bool rpm)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct net_device *netdev = pci_get_drvdata(pdev);
@@ -9297,17 +9297,24 @@ static int __maybe_unused igb_resume(struct device *dev)
wr32(E1000_WUS, ~0);
rtnl_lock();
if (!rpm)
rtnl_lock();
if (!err && netif_running(netdev))
err = __igb_open(netdev, true);
if (!err)
netif_device_attach(netdev);
rtnl_unlock();
if (!rpm)
rtnl_unlock();
return err;
}
static int __maybe_unused igb_resume(struct device *dev)
{
return __igb_resume(dev, false);
}
static int __maybe_unused igb_runtime_idle(struct device *dev)
{
struct net_device *netdev = dev_get_drvdata(dev);
@@ -9326,7 +9333,7 @@ static int __maybe_unused igb_runtime_suspend(struct device *dev)
static int __maybe_unused igb_runtime_resume(struct device *dev)
{
return igb_resume(dev);
return __igb_resume(dev, true);
}
static void igb_shutdown(struct pci_dev *pdev)
@@ -9442,7 +9449,7 @@ static pci_ers_result_t igb_io_error_detected(struct pci_dev *pdev,
* @pdev: Pointer to PCI device
*
* Restart the card from scratch, as if from a cold-boot. Implementation
* resembles the first-half of the igb_resume routine.
* resembles the first-half of the __igb_resume routine.
**/
static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
{
@@ -9482,7 +9489,7 @@ static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
*
* This callback is called when the error recovery driver tells us that
* its OK to resume normal operation. Implementation resembles the
* second-half of the igb_resume routine.
* second-half of the __igb_resume routine.
*/
static void igb_io_resume(struct pci_dev *pdev)
{


@@ -5467,6 +5467,9 @@ static irqreturn_t igc_intr_msi(int irq, void *data)
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
if (icr & IGC_ICR_TS)
igc_tsync_interrupt(adapter);
napi_schedule(&q_vector->napi);
return IRQ_HANDLED;
@@ -5510,6 +5513,9 @@ static irqreturn_t igc_intr(int irq, void *data)
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
if (icr & IGC_ICR_TS)
igc_tsync_interrupt(adapter);
napi_schedule(&q_vector->napi);
return IRQ_HANDLED;


@@ -768,7 +768,20 @@ int igc_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr)
*/
static bool igc_is_crosststamp_supported(struct igc_adapter *adapter)
{
return IS_ENABLED(CONFIG_X86_TSC) ? pcie_ptm_enabled(adapter->pdev) : false;
if (!IS_ENABLED(CONFIG_X86_TSC))
return false;
/* FIXME: it was noticed that enabling support for PCIe PTM in
* some i225-V models could cause lockups when bringing the
* interface up/down. There should be no downsides to
* disabling crosstimestamping support for i225-V, as it
* doesn't have any PTP support. That way we gain some time
* while root-causing the issue.
*/
if (adapter->pdev->device == IGC_DEV_ID_I225_V)
return false;
return pcie_ptm_enabled(adapter->pdev);
}
static struct system_counterval_t igc_device_tstamp_to_system(u64 tstamp)


@@ -71,6 +71,8 @@ struct xrx200_priv {
struct xrx200_chan chan_tx;
struct xrx200_chan chan_rx;
u16 rx_buf_size;
struct net_device *net_dev;
struct device *dev;
@@ -97,6 +99,16 @@ static void xrx200_pmac_mask(struct xrx200_priv *priv, u32 clear, u32 set,
xrx200_pmac_w32(priv, val, offset);
}
static int xrx200_max_frame_len(int mtu)
{
return VLAN_ETH_HLEN + mtu;
}
static int xrx200_buffer_size(int mtu)
{
return round_up(xrx200_max_frame_len(mtu), 4 * XRX200_DMA_BURST_LEN);
}
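The helpers above size the receive buffer from the MTU and round it up to a whole number of DMA bursts. A worked example, assuming the kernel's VLAN_ETH_HLEN of 18 and an illustrative burst length of 8: a 1500-byte MTU gives a 1518-byte frame, rounded up to a multiple of 4 * 8 = 32 bytes, i.e. 1536.

#include <stdio.h>

#define VLAN_ETH_HLEN 18	/* Ethernet header plus one VLAN tag */
#define DMA_BURST_LEN 8		/* illustrative burst length */

/* round_up() analogue for a power-of-two alignment. */
static int round_up_pow2(int x, int align)
{
	return (x + align - 1) & ~(align - 1);
}

int main(void)
{
	int mtu = 1500;
	int frame = VLAN_ETH_HLEN + mtu;

	printf("%d\n", round_up_pow2(frame, 4 * DMA_BURST_LEN));	/* 1536 */
	return 0;
}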
/* drop all the packets from the DMA ring */
static void xrx200_flush_dma(struct xrx200_chan *ch)
{
@@ -109,8 +121,7 @@ static void xrx200_flush_dma(struct xrx200_chan *ch)
break;
desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
(ch->priv->net_dev->mtu + VLAN_ETH_HLEN +
ETH_FCS_LEN);
ch->priv->rx_buf_size;
ch->dma.desc++;
ch->dma.desc %= LTQ_DESC_NUM;
}
@@ -158,21 +169,21 @@ static int xrx200_close(struct net_device *net_dev)
static int xrx200_alloc_skb(struct xrx200_chan *ch)
{
int len = ch->priv->net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
struct sk_buff *skb = ch->skb[ch->dma.desc];
struct xrx200_priv *priv = ch->priv;
dma_addr_t mapping;
int ret = 0;
ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
len);
ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(priv->net_dev,
priv->rx_buf_size);
if (!ch->skb[ch->dma.desc]) {
ret = -ENOMEM;
goto skip;
}
mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
len, DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
mapping = dma_map_single(priv->dev, ch->skb[ch->dma.desc]->data,
priv->rx_buf_size, DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(priv->dev, mapping))) {
dev_kfree_skb_any(ch->skb[ch->dma.desc]);
ch->skb[ch->dma.desc] = skb;
ret = -ENOMEM;
@@ -184,7 +195,7 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch)
wmb();
skip:
ch->dma.desc_base[ch->dma.desc].ctl =
LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | len;
LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | priv->rx_buf_size;
return ret;
}
@@ -213,7 +224,7 @@ static int xrx200_hw_receive(struct xrx200_chan *ch)
skb->protocol = eth_type_trans(skb, net_dev);
netif_receive_skb(skb);
net_dev->stats.rx_packets++;
net_dev->stats.rx_bytes += len - ETH_FCS_LEN;
net_dev->stats.rx_bytes += len;
return 0;
}
@@ -356,6 +367,7 @@ xrx200_change_mtu(struct net_device *net_dev, int new_mtu)
int ret = 0;
net_dev->mtu = new_mtu;
priv->rx_buf_size = xrx200_buffer_size(new_mtu);
if (new_mtu <= old_mtu)
return ret;
@@ -375,6 +387,7 @@ xrx200_change_mtu(struct net_device *net_dev, int new_mtu)
ret = xrx200_alloc_skb(ch_rx);
if (ret) {
net_dev->mtu = old_mtu;
priv->rx_buf_size = xrx200_buffer_size(old_mtu);
break;
}
dev_kfree_skb_any(skb);
@@ -505,7 +518,8 @@ static int xrx200_probe(struct platform_device *pdev)
net_dev->netdev_ops = &xrx200_netdev_ops;
SET_NETDEV_DEV(net_dev, dev);
net_dev->min_mtu = ETH_ZLEN;
net_dev->max_mtu = XRX200_DMA_DATA_LEN - VLAN_ETH_HLEN - ETH_FCS_LEN;
net_dev->max_mtu = XRX200_DMA_DATA_LEN - xrx200_max_frame_len(0);
priv->rx_buf_size = xrx200_buffer_size(ETH_DATA_LEN);
/* load the memory ranges */
priv->pmac_reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);


@@ -54,12 +54,14 @@ int prestera_port_pvid_set(struct prestera_port *port, u16 vid)
struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw,
u32 dev_id, u32 hw_id)
{
struct prestera_port *port = NULL;
struct prestera_port *port = NULL, *tmp;
read_lock(&sw->port_list_lock);
list_for_each_entry(port, &sw->port_list, list) {
if (port->dev_id == dev_id && port->hw_id == hw_id)
list_for_each_entry(tmp, &sw->port_list, list) {
if (tmp->dev_id == dev_id && tmp->hw_id == hw_id) {
port = tmp;
break;
}
}
read_unlock(&sw->port_list_lock);
@@ -68,12 +70,14 @@ struct prestera_port *prestera_port_find_by_hwid(struct prestera_switch *sw,
struct prestera_port *prestera_find_port(struct prestera_switch *sw, u32 id)
{
struct prestera_port *port = NULL;
struct prestera_port *port = NULL, *tmp;
read_lock(&sw->port_list_lock);
list_for_each_entry(port, &sw->port_list, list) {
if (port->id == id)
list_for_each_entry(tmp, &sw->port_list, list) {
if (tmp->id == id) {
port = tmp;
break;
}
}
read_unlock(&sw->port_list_lock);
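Both lookups above now use a dedicated cursor (tmp) and assign the result only on a match. With list_for_each_entry(), the cursor after a full, matchless traversal points at the list head cast to the entry type, not at a real entry, so returning the cursor itself yields a bogus pointer instead of NULL. A user-space sketch of the corrected shape:

#include <stddef.h>
#include <stdio.h>

struct node {
	int id;
	struct node *next;
};

static struct node *find(struct node *head, int id)
{
	struct node *found = NULL, *tmp;

	for (tmp = head; tmp; tmp = tmp->next) {
		if (tmp->id == id) {
			found = tmp;	/* publish only on a real match */
			break;
		}
	}
	return found;	/* NULL means "not found", unambiguously */
}

int main(void)
{
	struct node c = { 3, NULL }, b = { 2, &c }, a = { 1, &b };

	printf("%s\n", find(&a, 5) ? "hit" : "miss");	/* miss */
	return 0;
}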
@@ -764,23 +768,27 @@ static int prestera_netdev_port_event(struct net_device *lower,
struct net_device *dev,
unsigned long event, void *ptr)
{
struct netdev_notifier_changeupper_info *info = ptr;
struct netdev_notifier_info *info = ptr;
struct netdev_notifier_changeupper_info *cu_info;
struct prestera_port *port = netdev_priv(dev);
struct netlink_ext_ack *extack;
struct net_device *upper;
extack = netdev_notifier_info_to_extack(&info->info);
upper = info->upper_dev;
extack = netdev_notifier_info_to_extack(info);
cu_info = container_of(info,
struct netdev_notifier_changeupper_info,
info);
switch (event) {
case NETDEV_PRECHANGEUPPER:
upper = cu_info->upper_dev;
if (!netif_is_bridge_master(upper) &&
!netif_is_lag_master(upper)) {
NL_SET_ERR_MSG_MOD(extack, "Unknown upper device type");
return -EINVAL;
}
if (!info->linking)
if (!cu_info->linking)
break;
if (netdev_has_any_upper_dev(upper)) {
@@ -789,7 +797,7 @@ static int prestera_netdev_port_event(struct net_device *lower,
}
if (netif_is_lag_master(upper) &&
!prestera_lag_master_check(upper, info->upper_info, extack))
!prestera_lag_master_check(upper, cu_info->upper_info, extack))
return -EOPNOTSUPP;
if (netif_is_lag_master(upper) && vlan_uses_dev(dev)) {
NL_SET_ERR_MSG_MOD(extack,
@@ -805,14 +813,15 @@ static int prestera_netdev_port_event(struct net_device *lower,
break;
case NETDEV_CHANGEUPPER:
upper = cu_info->upper_dev;
if (netif_is_bridge_master(upper)) {
if (info->linking)
if (cu_info->linking)
return prestera_bridge_port_join(upper, port,
extack);
else
prestera_bridge_port_leave(upper, port);
} else if (netif_is_lag_master(upper)) {
if (info->linking)
if (cu_info->linking)
return prestera_lag_port_add(port, upper);
else
prestera_lag_port_del(port);


@@ -783,6 +783,8 @@ struct mlx5e_channel {
DECLARE_BITMAP(state, MLX5E_CHANNEL_NUM_STATES);
int ix;
int cpu;
/* Sync between icosq recovery and XSK enable/disable. */
struct mutex icosq_recovery_lock;
};
struct mlx5e_ptp;
@@ -1014,9 +1016,6 @@ int mlx5e_create_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param);
void mlx5e_destroy_rq(struct mlx5e_rq *rq);
struct mlx5e_sq_param;
int mlx5e_open_icosq(struct mlx5e_channel *c, struct mlx5e_params *params,
struct mlx5e_sq_param *param, struct mlx5e_icosq *sq);
void mlx5e_close_icosq(struct mlx5e_icosq *sq);
int mlx5e_open_xdpsq(struct mlx5e_channel *c, struct mlx5e_params *params,
struct mlx5e_sq_param *param, struct xsk_buff_pool *xsk_pool,
struct mlx5e_xdpsq *sq, bool is_redirect);

Some files were not shown because too many files have changed in this diff.