drm fixes for 6.12-rc6

xe:
 - Fix missing HPD interrupt enabling, bringing one PM refactor with it
 - Workaround LNL GGTT invalidation not being visible to GuC
 - Avoid getting jobs stuck without a protecting timeout
 
 ivpu:
 - Fix firewall IRQ handling
 
 panthor:
 - Fix firmware initialization wrt page sizes
 - Fix handling and reporting of dead job groups
 
 sched:
 - Guarantee forward progress via WQ_MEM_RECLAIM
 
 tests:
 - Fix memory leak in drm_display_mode_from_cea_vic()
 
 amdgpu:
 - DCN 3.5 fix
 - Vangogh SMU KASAN fix
 - SMU 13 profile reporting fix
 
 mediatek:
 - Fix alpha blending degradation
 - Fix color format macros in OVL
 - Fix MT8188 DPTX using the MT8195 eFuse calibration table
 - Fix potential NULL dereference in mtk_crtc_destroy()
 - Correct dpi power-domains property
 - Add split subschema property constraints
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEEKbZHaGwW9KfbeusDHTzWXnEhr4FAmclJX4ACgkQDHTzWXnE
 hr7rEw/9E+zpgUW52ThBDpdE4A2O0qdSUd8LtQJbkgx+Us85Hv2p9e+HOhQqrKMj
 9O4pwh47oI4vbk3uL68EQTkmmqBUbLNRhfTnps/kVPiMQLklzl3cUw7Jdq2/1E0r
 Nyh+kJFsHXFcHwB2lB2Cy+ATT8Vw28WlWni7y3hZZvzCrbOVSN16bXmTbjQ/vQYR
 TSIwJF9P9h+P96JMBXdGhZx3eyiMOqd5EfeLcR9fKRdQsY4ZVk+9Q5CKS/pyKfXP
 A9e+MHrHf7fSlFT3zptXZ9l4CLo6QGxGppYy79h8hp1oLPFmm1jnWFfDPdFSa/qm
 +ZTPncf0fz5m4YmJZFTcS+69ITQDjON6FBOu3fnoGuLG8+31/G+/1+uvrNxOID0T
 zbRcJCPPtdSpDRWsl5+29HviHTvhniM+zYZIaYl1LrCDxMAqoUjAB3dRZKgKHXmF
 KW7kILXe33m82X7V+jVuELp4m4Q062CIa5tTvDDJ3iPRFKc4HAIioD33g1DPLRPy
 +A1iwvC0EoghID6eTxVnncQO1snlgXeS4begUtx2E/LkkIH3TbtStM5aaz7tHgNT
 LTTk2YsGr+5JQQ8N1ci1clZUfSD9xgFlDwobUBGMg2Lo+tVEKhHp8c1Q3quCt214
 7PMmNOQXr243vmcs7gvl19pfy0HZRBdISabRazFYCYhiwCsvZQE=
 =U58l
 -----END PGP SIGNATURE-----

Merge tag 'drm-fixes-2024-11-02' of https://gitlab.freedesktop.org/drm/kernel

Pull drm fixes from Dave Airlie:
 "Regular fixes pull, nothing too out of the ordinary, the mediatek
  fixes came in a batch that I might have preferred a bit earlier but
  all seem fine, otherwise regular xe/amdgpu and a few misc ones.

  xe:
   - Fix missing HPD interrupt enabling, bringing one PM refactor with it
   - Workaround LNL GGTT invalidation not being visible to GuC
   - Avoid getting jobs stuck without a protecting timeout

  ivpu:
   - Fix firewall IRQ handling

  panthor:
   - Fix firmware initialization wrt page sizes
   - Fix handling and reporting of dead job groups

  sched:
    - Guarantee forward progress via WQ_MEM_RECLAIM

  tests:
   - Fix memory leak in drm_display_mode_from_cea_vic()

  amdgpu:
   - DCN 3.5 fix
   - Vangogh SMU KASAN fix
   - SMU 13 profile reporting fix

  mediatek:
    - Fix alpha blending degradation
    - Fix color format macros in OVL
    - Fix MT8188 DPTX using the MT8195 eFuse calibration table
   - Fix potential NULL dereference in mtk_crtc_destroy()
   - Correct dpi power-domains property
   - Add split subschema property constraints"

* tag 'drm-fixes-2024-11-02' of https://gitlab.freedesktop.org/drm/kernel: (27 commits)
  drm/xe: Don't short circuit TDR on jobs not started
  drm/xe: Add mmio read before GGTT invalidate
  drm/tests: hdmi: Fix memory leaks in drm_display_mode_from_cea_vic()
  drm/connector: hdmi: Fix memory leak in drm_display_mode_from_cea_vic()
  drm/tests: helpers: Add helper for drm_display_mode_from_cea_vic()
  drm/panthor: Report group as timedout when we fail to properly suspend
  drm/panthor: Fail job creation when the group is dead
  drm/panthor: Fix firmware initialization on systems with a page size > 4k
  accel/ivpu: Fix NOC firewall interrupt handling
  drm/xe/display: Add missing HPD interrupt enabling during non-d3cold RPM resume
  drm/xe/display: Separate the d3cold and non-d3cold runtime PM handling
  drm/xe: Remove runtime argument from display s/r functions
  drm/amdgpu/smu13: fix profile reporting
  drm/amd/pm: Vangogh: Fix kernel memory out of bounds write
  Revert "drm/amd/display: update DML2 policy EnhancedPrefetchScheduleAccelerationFinal DCN35"
  drm/sched: Mark scheduler work queues with WQ_MEM_RECLAIM
  drm/tegra: Fix NULL vs IS_ERR() check in probe()
  dt-bindings: display: mediatek: split: add subschema property constraints
  dt-bindings: display: mediatek: dpi: correct power-domains property
  drm/mediatek: Fix potential NULL dereference in mtk_crtc_destroy()
  ...
Merged by Linus Torvalds on 2024-11-01 15:37:09 -10:00 as commit 269ce3bd62.
36 changed files with 412 additions and 121 deletions.


@@ -63,6 +63,16 @@ properties:
- const: sleep
power-domains:
description: |
The MediaTek DPI module is typically associated with one of the
following multimedia power domains:
POWER_DOMAIN_DISPLAY
POWER_DOMAIN_VDOSYS
POWER_DOMAIN_MM
The specific power domain used varies depending on the SoC design.
It is recommended to explicitly add the appropriate power domain
property to the DPI node in the device tree.
maxItems: 1
port:
@@ -79,20 +89,6 @@ required:
- clock-names
- port
allOf:
- if:
not:
properties:
compatible:
contains:
enum:
- mediatek,mt6795-dpi
- mediatek,mt8173-dpi
- mediatek,mt8186-dpi
then:
properties:
power-domains: false
additionalProperties: false
examples:


@@ -38,6 +38,7 @@ properties:
description: A phandle and PM domain specifier as defined by bindings of
the power controller specified by phandle. See
Documentation/devicetree/bindings/power/power-domain.yaml for details.
maxItems: 1
mediatek,gce-client-reg:
description:
@@ -57,6 +58,9 @@ properties:
clocks:
items:
- description: SPLIT Clock
- description: Used for interfacing with the HDMI RX signal source.
- description: Paired with receiving HDMI RX metadata.
minItems: 1
required:
- compatible
@@ -72,9 +76,24 @@ allOf:
const: mediatek,mt8195-mdp3-split
then:
properties:
clocks:
minItems: 3
required:
- mediatek,gce-client-reg
- if:
properties:
compatible:
contains:
const: mediatek,mt8173-disp-split
then:
properties:
clocks:
maxItems: 1
additionalProperties: false
examples:


@@ -108,6 +108,14 @@ static int reset_pending_show(struct seq_file *s, void *v)
return 0;
}
static int firewall_irq_counter_show(struct seq_file *s, void *v)
{
struct ivpu_device *vdev = seq_to_ivpu(s);
seq_printf(s, "%d\n", atomic_read(&vdev->hw->firewall_irq_counter));
return 0;
}
static const struct drm_debugfs_info vdev_debugfs_list[] = {
{"bo_list", bo_list_show, 0},
{"fw_name", fw_name_show, 0},
@@ -116,6 +124,7 @@ static const struct drm_debugfs_info vdev_debugfs_list[] = {
{"last_bootmode", last_bootmode_show, 0},
{"reset_counter", reset_counter_show, 0},
{"reset_pending", reset_pending_show, 0},
{"firewall_irq_counter", firewall_irq_counter_show, 0},
};
static ssize_t


@@ -249,6 +249,7 @@ int ivpu_hw_init(struct ivpu_device *vdev)
platform_init(vdev);
wa_init(vdev);
timeouts_init(vdev);
atomic_set(&vdev->hw->firewall_irq_counter, 0);
return 0;
}


@@ -52,6 +52,7 @@ struct ivpu_hw_info {
int dma_bits;
ktime_t d0i3_entry_host_ts;
u64 d0i3_entry_vpu_ts;
atomic_t firewall_irq_counter;
};
int ivpu_hw_init(struct ivpu_device *vdev);


@@ -1062,7 +1062,10 @@ static void irq_wdt_mss_handler(struct ivpu_device *vdev)
static void irq_noc_firewall_handler(struct ivpu_device *vdev)
{
ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ");
atomic_inc(&vdev->hw->firewall_irq_counter);
ivpu_dbg(vdev, IRQ, "NOC Firewall interrupt detected, counter %d\n",
atomic_read(&vdev->hw->firewall_irq_counter));
}
/* Handler for IRQs from NPU core */


@@ -303,6 +303,7 @@ void build_unoptimized_policy_settings(enum dml_project_id project, struct dml_m
if (project == dml_project_dcn35 ||
project == dml_project_dcn351) {
policy->DCCProgrammingAssumesScanDirectionUnknownFinal = false;
policy->EnhancedPrefetchScheduleAccelerationFinal = 0;
policy->AllowForPStateChangeOrStutterInVBlankFinal = dml_prefetch_support_uclk_fclk_and_stutter_if_possible; /*new*/
policy->UseOnlyMaxPrefetchModes = 1;
}


@@ -242,7 +242,9 @@ static int vangogh_tables_init(struct smu_context *smu)
goto err0_out;
smu_table->metrics_time = 0;
smu_table->gpu_metrics_table_size = max(sizeof(struct gpu_metrics_v2_3), sizeof(struct gpu_metrics_v2_2));
smu_table->gpu_metrics_table_size = sizeof(struct gpu_metrics_v2_2);
smu_table->gpu_metrics_table_size = max(smu_table->gpu_metrics_table_size, sizeof(struct gpu_metrics_v2_3));
smu_table->gpu_metrics_table_size = max(smu_table->gpu_metrics_table_size, sizeof(struct gpu_metrics_v2_4));
smu_table->gpu_metrics_table = kzalloc(smu_table->gpu_metrics_table_size, GFP_KERNEL);
if (!smu_table->gpu_metrics_table)
goto err1_out;
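Aside on the shape of that fix: the out-of-bounds write came from sizing the
metrics buffer for only the v2_2/v2_3 layouts while gpu_metrics_v2_4 data can
also be written into it; the chained max() statements grow the allocation to
the largest of the three. A sketch of an equivalent one-liner using the
max3() helper from <linux/minmax.h> (not the committed form):

        smu_table->gpu_metrics_table_size = max3(sizeof(struct gpu_metrics_v2_2),
                                                 sizeof(struct gpu_metrics_v2_3),
                                                 sizeof(struct gpu_metrics_v2_4));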


@@ -2485,7 +2485,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
DpmActivityMonitorCoeffInt_t *activity_monitor =
&(activity_monitor_external.DpmActivityMonitorCoeffInt);
int workload_type, ret = 0;
u32 workload_mask;
u32 workload_mask, selected_workload_mask;
smu->power_profile_mode = input[size];
@@ -2552,7 +2552,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
if (workload_type < 0)
return -EINVAL;
workload_mask = 1 << workload_type;
selected_workload_mask = workload_mask = 1 << workload_type;
/* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
@@ -2572,7 +2572,7 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
workload_mask,
NULL);
if (!ret)
smu->workload_mask = workload_mask;
smu->workload_mask = selected_workload_mask;
return ret;
}
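The subtlety behind the reporting fix: per the comment above, SMU13.0.0/10
may OR the power-saving profile into the mask actually sent to the firmware,
and storing that combined mask in smu->workload_mask made profile reporting
show a mode the user never selected; selected_workload_mask preserves only
the user's choice. Illustrative values (bit positions hypothetical):

        workload_mask          = BIT(1) | BIT(5); /* user's profile + power-saving bit */
        selected_workload_mask = BIT(1);          /* what profile reporting should show */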


@@ -127,9 +127,8 @@ static void mtk_crtc_destroy(struct drm_crtc *crtc)
mtk_mutex_put(mtk_crtc->mutex);
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
cmdq_pkt_destroy(&mtk_crtc->cmdq_client, &mtk_crtc->cmdq_handle);
if (mtk_crtc->cmdq_client.chan) {
cmdq_pkt_destroy(&mtk_crtc->cmdq_client, &mtk_crtc->cmdq_handle);
mbox_free_channel(mtk_crtc->cmdq_client.chan);
mtk_crtc->cmdq_client.chan = NULL;
}
@@ -913,6 +912,7 @@ static int mtk_crtc_init_comp_planes(struct drm_device *drm_dev,
BIT(pipe),
mtk_crtc_plane_type(mtk_crtc->layer_nr, num_planes),
mtk_ddp_comp_supported_rotations(comp),
mtk_ddp_comp_get_blend_modes(comp),
mtk_ddp_comp_get_formats(comp),
mtk_ddp_comp_get_num_formats(comp), i);
if (ret)


@@ -363,6 +363,7 @@ static const struct mtk_ddp_comp_funcs ddp_ovl = {
.layer_config = mtk_ovl_layer_config,
.bgclr_in_on = mtk_ovl_bgclr_in_on,
.bgclr_in_off = mtk_ovl_bgclr_in_off,
.get_blend_modes = mtk_ovl_get_blend_modes,
.get_formats = mtk_ovl_get_formats,
.get_num_formats = mtk_ovl_get_num_formats,
};
@@ -416,6 +417,7 @@ static const struct mtk_ddp_comp_funcs ddp_ovl_adaptor = {
.disconnect = mtk_ovl_adaptor_disconnect,
.add = mtk_ovl_adaptor_add_comp,
.remove = mtk_ovl_adaptor_remove_comp,
.get_blend_modes = mtk_ovl_adaptor_get_blend_modes,
.get_formats = mtk_ovl_adaptor_get_formats,
.get_num_formats = mtk_ovl_adaptor_get_num_formats,
.mode_valid = mtk_ovl_adaptor_mode_valid,


@@ -80,6 +80,7 @@ struct mtk_ddp_comp_funcs {
void (*ctm_set)(struct device *dev,
struct drm_crtc_state *state);
struct device * (*dma_dev_get)(struct device *dev);
u32 (*get_blend_modes)(struct device *dev);
const u32 *(*get_formats)(struct device *dev);
size_t (*get_num_formats)(struct device *dev);
void (*connect)(struct device *dev, struct device *mmsys_dev, unsigned int next);
@@ -266,6 +267,15 @@ static inline struct device *mtk_ddp_comp_dma_dev_get(struct mtk_ddp_comp *comp)
return comp->dev;
}
static inline
u32 mtk_ddp_comp_get_blend_modes(struct mtk_ddp_comp *comp)
{
if (comp->funcs && comp->funcs->get_blend_modes)
return comp->funcs->get_blend_modes(comp->dev);
return 0;
}
static inline
const u32 *mtk_ddp_comp_get_formats(struct mtk_ddp_comp *comp)
{


@@ -103,6 +103,7 @@ void mtk_ovl_register_vblank_cb(struct device *dev,
void mtk_ovl_unregister_vblank_cb(struct device *dev);
void mtk_ovl_enable_vblank(struct device *dev);
void mtk_ovl_disable_vblank(struct device *dev);
u32 mtk_ovl_get_blend_modes(struct device *dev);
const u32 *mtk_ovl_get_formats(struct device *dev);
size_t mtk_ovl_get_num_formats(struct device *dev);
@@ -131,6 +132,7 @@ void mtk_ovl_adaptor_start(struct device *dev);
void mtk_ovl_adaptor_stop(struct device *dev);
unsigned int mtk_ovl_adaptor_layer_nr(struct device *dev);
struct device *mtk_ovl_adaptor_dma_dev_get(struct device *dev);
u32 mtk_ovl_adaptor_get_blend_modes(struct device *dev);
const u32 *mtk_ovl_adaptor_get_formats(struct device *dev);
size_t mtk_ovl_adaptor_get_num_formats(struct device *dev);
enum drm_mode_status mtk_ovl_adaptor_mode_valid(struct device *dev,


@@ -65,8 +65,8 @@
#define OVL_CON_CLRFMT_RGB (1 << 12)
#define OVL_CON_CLRFMT_ARGB8888 (2 << 12)
#define OVL_CON_CLRFMT_RGBA8888 (3 << 12)
#define OVL_CON_CLRFMT_ABGR8888 (OVL_CON_CLRFMT_RGBA8888 | OVL_CON_BYTE_SWAP)
#define OVL_CON_CLRFMT_BGRA8888 (OVL_CON_CLRFMT_ARGB8888 | OVL_CON_BYTE_SWAP)
#define OVL_CON_CLRFMT_ABGR8888 (OVL_CON_CLRFMT_ARGB8888 | OVL_CON_BYTE_SWAP)
#define OVL_CON_CLRFMT_BGRA8888 (OVL_CON_CLRFMT_RGBA8888 | OVL_CON_BYTE_SWAP)
#define OVL_CON_CLRFMT_UYVY (4 << 12)
#define OVL_CON_CLRFMT_YUYV (5 << 12)
#define OVL_CON_MTX_YUV_TO_RGB (6 << 16)
@@ -146,6 +146,7 @@ struct mtk_disp_ovl_data {
bool fmt_rgb565_is_0;
bool smi_id_en;
bool supports_afbc;
const u32 blend_modes;
const u32 *formats;
size_t num_formats;
bool supports_clrfmt_ext;
@@ -214,6 +215,13 @@ void mtk_ovl_disable_vblank(struct device *dev)
writel_relaxed(0x0, ovl->regs + DISP_REG_OVL_INTEN);
}
u32 mtk_ovl_get_blend_modes(struct device *dev)
{
struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
return ovl->data->blend_modes;
}
const u32 *mtk_ovl_get_formats(struct device *dev)
{
struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
@@ -386,14 +394,27 @@ void mtk_ovl_layer_off(struct device *dev, unsigned int idx,
DISP_REG_OVL_RDMA_CTRL(idx));
}
static unsigned int ovl_fmt_convert(struct mtk_disp_ovl *ovl, unsigned int fmt,
unsigned int blend_mode)
static unsigned int mtk_ovl_fmt_convert(struct mtk_disp_ovl *ovl,
struct mtk_plane_state *state)
{
/* The return value in switch "MEM_MODE_INPUT_FORMAT_XXX"
* is defined in mediatek HW data sheet.
* The alphabet order in XXX is no relation to data
* arrangement in memory.
*/
unsigned int fmt = state->pending.format;
unsigned int blend_mode = DRM_MODE_BLEND_COVERAGE;
/*
* For the platforms where OVL_CON_CLRFMT_MAN is defined in the hardware data sheet
* and supports premultiplied color formats, such as OVL_CON_CLRFMT_PARGB8888.
*
* Check blend_modes in the driver data to see if premultiplied mode is supported.
* If not, use coverage mode instead to set it to the supported color formats.
*
* Current DRM assumption is that alpha is default premultiplied, so the bitmask of
* blend_modes must include BIT(DRM_MODE_BLEND_PREMULTI). Otherwise, mtk_plane_init()
* will get an error return from drm_plane_create_blend_mode_property() and
* state->base.pixel_blend_mode should not be used.
*/
if (ovl->data->blend_modes & BIT(DRM_MODE_BLEND_PREMULTI))
blend_mode = state->base.pixel_blend_mode;
switch (fmt) {
default:
case DRM_FORMAT_RGB565:
@@ -471,20 +492,26 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
return;
}
con = ovl_fmt_convert(ovl, fmt, blend_mode);
con = mtk_ovl_fmt_convert(ovl, state);
if (state->base.fb) {
con |= OVL_CON_AEN;
con |= state->base.alpha & OVL_CON_ALPHA;
}
/* CONST_BLD must be enabled for XRGB formats although the alpha channel
* can be ignored, or OVL will still read the value from memory.
* For RGB888 related formats, whether CONST_BLD is enabled or not won't
* affect the result. Therefore we use !has_alpha as the condition.
*/
if ((state->base.fb && !state->base.fb->format->has_alpha) ||
blend_mode == DRM_MODE_BLEND_PIXEL_NONE)
ignore_pixel_alpha = OVL_CONST_BLEND;
/*
* For blend_modes supported SoCs, always enable alpha blending.
* For blend_modes unsupported SoCs, enable alpha blending when has_alpha is set.
*/
if (blend_mode || state->base.fb->format->has_alpha)
con |= OVL_CON_AEN;
/*
* Although the alpha channel can be ignored, CONST_BLD must be enabled
* for XRGB format, otherwise OVL will still read the value from memory.
* For RGB888 related formats, whether CONST_BLD is enabled or not won't
* affect the result. Therefore we use !has_alpha as the condition.
*/
if (blend_mode == DRM_MODE_BLEND_PIXEL_NONE || !state->base.fb->format->has_alpha)
ignore_pixel_alpha = OVL_CONST_BLEND;
}
if (pending->rotation & DRM_MODE_REFLECT_Y) {
con |= OVL_CON_VIRT_FLIP;
@@ -663,6 +690,9 @@ static const struct mtk_disp_ovl_data mt8192_ovl_driver_data = {
.layer_nr = 4,
.fmt_rgb565_is_0 = true,
.smi_id_en = true,
.blend_modes = BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE) |
BIT(DRM_MODE_BLEND_PIXEL_NONE),
.formats = mt8173_formats,
.num_formats = ARRAY_SIZE(mt8173_formats),
};
@@ -673,6 +703,9 @@ static const struct mtk_disp_ovl_data mt8192_ovl_2l_driver_data = {
.layer_nr = 2,
.fmt_rgb565_is_0 = true,
.smi_id_en = true,
.blend_modes = BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE) |
BIT(DRM_MODE_BLEND_PIXEL_NONE),
.formats = mt8173_formats,
.num_formats = ARRAY_SIZE(mt8173_formats),
};
@@ -684,6 +717,9 @@ static const struct mtk_disp_ovl_data mt8195_ovl_driver_data = {
.fmt_rgb565_is_0 = true,
.smi_id_en = true,
.supports_afbc = true,
.blend_modes = BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE) |
BIT(DRM_MODE_BLEND_PIXEL_NONE),
.formats = mt8195_formats,
.num_formats = ARRAY_SIZE(mt8195_formats),
.supports_clrfmt_ext = true,


@@ -400,6 +400,13 @@ void mtk_ovl_adaptor_disable_vblank(struct device *dev)
mtk_ethdr_disable_vblank(ovl_adaptor->ovl_adaptor_comp[OVL_ADAPTOR_ETHDR0]);
}
u32 mtk_ovl_adaptor_get_blend_modes(struct device *dev)
{
struct mtk_disp_ovl_adaptor *ovl_adaptor = dev_get_drvdata(dev);
return mtk_ethdr_get_blend_modes(ovl_adaptor->ovl_adaptor_comp[OVL_ADAPTOR_ETHDR0]);
}
const u32 *mtk_ovl_adaptor_get_formats(struct device *dev)
{
struct mtk_disp_ovl_adaptor *ovl_adaptor = dev_get_drvdata(dev);


@@ -145,6 +145,89 @@ struct mtk_dp_data {
u16 audio_m_div2_bit;
};
static const struct mtk_dp_efuse_fmt mt8188_dp_efuse_fmt[MTK_DP_CAL_MAX] = {
[MTK_DP_CAL_GLB_BIAS_TRIM] = {
.idx = 0,
.shift = 10,
.mask = 0x1f,
.min_val = 1,
.max_val = 0x1e,
.default_val = 0xf,
},
[MTK_DP_CAL_CLKTX_IMPSE] = {
.idx = 0,
.shift = 15,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_0] = {
.idx = 1,
.shift = 0,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_1] = {
.idx = 1,
.shift = 8,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_2] = {
.idx = 1,
.shift = 16,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_PMOS_3] = {
.idx = 1,
.shift = 24,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_0] = {
.idx = 1,
.shift = 4,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_1] = {
.idx = 1,
.shift = 12,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_2] = {
.idx = 1,
.shift = 20,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
[MTK_DP_CAL_LN_TX_IMPSEL_NMOS_3] = {
.idx = 1,
.shift = 28,
.mask = 0xf,
.min_val = 1,
.max_val = 0xe,
.default_val = 0x8,
},
};
static const struct mtk_dp_efuse_fmt mt8195_edp_efuse_fmt[MTK_DP_CAL_MAX] = {
[MTK_DP_CAL_GLB_BIAS_TRIM] = {
.idx = 3,
@@ -2771,7 +2854,7 @@ static SIMPLE_DEV_PM_OPS(mtk_dp_pm_ops, mtk_dp_suspend, mtk_dp_resume);
static const struct mtk_dp_data mt8188_dp_data = {
.bridge_type = DRM_MODE_CONNECTOR_DisplayPort,
.smc_cmd = MTK_DP_SIP_ATF_VIDEO_UNMUTE,
.efuse_fmt = mt8195_dp_efuse_fmt,
.efuse_fmt = mt8188_dp_efuse_fmt,
.audio_supported = true,
.audio_pkt_in_hblank_area = true,
.audio_m_div2_bit = MT8188_AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_2,


@@ -145,6 +145,13 @@ static irqreturn_t mtk_ethdr_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
u32 mtk_ethdr_get_blend_modes(struct device *dev)
{
return BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE) |
BIT(DRM_MODE_BLEND_PIXEL_NONE);
}
void mtk_ethdr_layer_config(struct device *dev, unsigned int idx,
struct mtk_plane_state *state,
struct cmdq_pkt *cmdq_pkt)


@@ -13,6 +13,7 @@ void mtk_ethdr_clk_disable(struct device *dev);
void mtk_ethdr_config(struct device *dev, unsigned int w,
unsigned int h, unsigned int vrefresh,
unsigned int bpc, struct cmdq_pkt *cmdq_pkt);
u32 mtk_ethdr_get_blend_modes(struct device *dev);
void mtk_ethdr_layer_config(struct device *dev, unsigned int idx,
struct mtk_plane_state *state,
struct cmdq_pkt *cmdq_pkt);


@@ -320,8 +320,8 @@ static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
unsigned long possible_crtcs, enum drm_plane_type type,
unsigned int supported_rotations, const u32 *formats,
size_t num_formats, unsigned int plane_idx)
unsigned int supported_rotations, const u32 blend_modes,
const u32 *formats, size_t num_formats, unsigned int plane_idx)
{
int err;
@@ -366,12 +366,11 @@ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
if (err)
DRM_ERROR("failed to create property: alpha\n");
err = drm_plane_create_blend_mode_property(plane,
BIT(DRM_MODE_BLEND_PREMULTI) |
BIT(DRM_MODE_BLEND_COVERAGE) |
BIT(DRM_MODE_BLEND_PIXEL_NONE));
if (err)
DRM_ERROR("failed to create property: blend_mode\n");
if (blend_modes) {
err = drm_plane_create_blend_mode_property(plane, blend_modes);
if (err)
DRM_ERROR("failed to create property: blend_mode\n");
}
drm_plane_helper_add(plane, &mtk_plane_helper_funcs);


@@ -48,6 +48,6 @@ to_mtk_plane_state(struct drm_plane_state *state)
int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
unsigned long possible_crtcs, enum drm_plane_type type,
unsigned int supported_rotations, const u32 *formats,
size_t num_formats, unsigned int plane_idx);
unsigned int supported_rotations, const u32 blend_modes,
const u32 *formats, size_t num_formats, unsigned int plane_idx);
#endif


@@ -487,6 +487,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev,
struct panthor_fw_binary_iter *iter,
u32 ehdr)
{
ssize_t vm_pgsz = panthor_vm_page_size(ptdev->fw->vm);
struct panthor_fw_binary_section_entry_hdr hdr;
struct panthor_fw_section *section;
u32 section_size;
@@ -515,8 +516,7 @@ static int panthor_fw_load_section_entry(struct panthor_device *ptdev,
return -EINVAL;
}
if ((hdr.va.start & ~PAGE_MASK) != 0 ||
(hdr.va.end & ~PAGE_MASK) != 0) {
if (!IS_ALIGNED(hdr.va.start, vm_pgsz) || !IS_ALIGNED(hdr.va.end, vm_pgsz)) {
drm_err(&ptdev->base, "Firmware corrupted, virtual addresses not page aligned: 0x%x-0x%x\n",
hdr.va.start, hdr.va.end);
return -EINVAL;


@@ -44,8 +44,7 @@ void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo)
to_panthor_bo(bo->obj)->exclusive_vm_root_gem != panthor_vm_root_gem(vm)))
goto out_free_bo;
ret = panthor_vm_unmap_range(vm, bo->va_node.start,
panthor_kernel_bo_size(bo));
ret = panthor_vm_unmap_range(vm, bo->va_node.start, bo->va_node.size);
if (ret)
goto out_free_bo;
@@ -95,10 +94,16 @@ panthor_kernel_bo_create(struct panthor_device *ptdev, struct panthor_vm *vm,
}
bo = to_panthor_bo(&obj->base);
size = obj->base.size;
kbo->obj = &obj->base;
bo->flags = bo_flags;
/* The system and GPU MMU page size might differ, which becomes a
* problem for FW sections that need to be mapped at explicit address
* since our PAGE_SIZE alignment might cover a VA range that's
* expected to be used for another section.
* Make sure we never map more than we need.
*/
size = ALIGN(size, panthor_vm_page_size(vm));
ret = panthor_vm_alloc_va(vm, gpu_va, size, &kbo->va_node);
if (ret)
goto err_put_obj;
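A concrete case of the comment above, assuming a 16 KiB CPU PAGE_SIZE over a
4 KiB GPU page table: ALIGN()ing a 4 KiB FW section to PAGE_SIZE would
reserve 16 KiB of GPU VA, and the 12 KiB of padding could cover the fixed
address the next FW section must be mapped at. Aligning to
panthor_vm_page_size(vm) reserves exactly what the GPU MMU needs.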


@@ -826,6 +826,14 @@ void panthor_vm_idle(struct panthor_vm *vm)
mutex_unlock(&ptdev->mmu->as.slots_lock);
}
u32 panthor_vm_page_size(struct panthor_vm *vm)
{
const struct io_pgtable *pgt = io_pgtable_ops_to_pgtable(vm->pgtbl_ops);
u32 pg_shift = ffs(pgt->cfg.pgsize_bitmap) - 1;
return 1u << pg_shift;
}
static void panthor_vm_stop(struct panthor_vm *vm)
{
drm_sched_stop(&vm->sched, NULL);
@@ -1025,12 +1033,13 @@ int
panthor_vm_alloc_va(struct panthor_vm *vm, u64 va, u64 size,
struct drm_mm_node *va_node)
{
ssize_t vm_pgsz = panthor_vm_page_size(vm);
int ret;
if (!size || (size & ~PAGE_MASK))
if (!size || !IS_ALIGNED(size, vm_pgsz))
return -EINVAL;
if (va != PANTHOR_VM_KERNEL_AUTO_VA && (va & ~PAGE_MASK))
if (va != PANTHOR_VM_KERNEL_AUTO_VA && !IS_ALIGNED(va, vm_pgsz))
return -EINVAL;
mutex_lock(&vm->mm_lock);
@@ -2366,11 +2375,12 @@ panthor_vm_bind_prepare_op_ctx(struct drm_file *file,
const struct drm_panthor_vm_bind_op *op,
struct panthor_vm_op_ctx *op_ctx)
{
ssize_t vm_pgsz = panthor_vm_page_size(vm);
struct drm_gem_object *gem;
int ret;
/* Aligned on page size. */
if ((op->va | op->size) & ~PAGE_MASK)
if (!IS_ALIGNED(op->va | op->size, vm_pgsz))
return -EINVAL;
switch (op->flags & DRM_PANTHOR_VM_BIND_OP_TYPE_MASK) {
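(Idiom worth noting: OR-ing op->va and op->size into a single IS_ALIGNED()
check works because the page size is a power of two; a bit set below the
alignment boundary in either value survives the OR and fails the one test.)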


@@ -30,6 +30,7 @@ panthor_vm_get_bo_for_va(struct panthor_vm *vm, u64 va, u64 *bo_offset);
int panthor_vm_active(struct panthor_vm *vm);
void panthor_vm_idle(struct panthor_vm *vm);
u32 panthor_vm_page_size(struct panthor_vm *vm);
int panthor_vm_as(struct panthor_vm *vm);
int panthor_vm_flush_all(struct panthor_vm *vm);


@@ -589,10 +589,11 @@ struct panthor_group {
* @timedout: True when a timeout occurred on any of the queues owned by
* this group.
*
* Timeouts can be reported by drm_sched or by the FW. In any case, any
* timeout situation is unrecoverable, and the group becomes useless.
* We simply wait for all references to be dropped so we can release the
* group object.
* Timeouts can be reported by drm_sched or by the FW. If a reset is required,
* and the group can't be suspended, this also leads to a timeout. In any case,
* any timeout situation is unrecoverable, and the group becomes useless. We
* simply wait for all references to be dropped so we can release the group
* object.
*/
bool timedout;
@@ -2640,6 +2641,12 @@ void panthor_sched_suspend(struct panthor_device *ptdev)
csgs_upd_ctx_init(&upd_ctx);
while (slot_mask) {
u32 csg_id = ffs(slot_mask) - 1;
struct panthor_csg_slot *csg_slot = &sched->csg_slots[csg_id];
/* We consider group suspension failures as fatal and flag the
* group as unusable by setting timedout=true.
*/
csg_slot->group->timedout = true;
csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, csg_id,
CSG_STATE_TERMINATE,
@@ -3409,6 +3416,11 @@ panthor_job_create(struct panthor_file *pfile,
goto err_put_job;
}
if (!group_can_run(job->group)) {
ret = -EINVAL;
goto err_put_job;
}
if (job->queue_idx >= job->group->queue_count ||
!job->group->queues[job->queue_idx]) {
ret = -EINVAL;


@@ -1276,10 +1276,11 @@ int drm_sched_init(struct drm_gpu_scheduler *sched,
sched->own_submit_wq = false;
} else {
#ifdef CONFIG_LOCKDEP
sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name, 0,
sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
WQ_MEM_RECLAIM,
&drm_sched_lockdep_map);
#else
sched->submit_wq = alloc_ordered_workqueue(name, 0);
sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
#endif
if (!sched->submit_wq)
return -ENOMEM;
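For context on why WQ_MEM_RECLAIM gives the forward-progress guarantee the
summary mentions: a workqueue created with that flag gets a dedicated rescuer
thread, so its work items (here the scheduler's run/free-job work, which
fence signaling and therefore memory reclaim can end up waiting on) still
execute when the system is too low on memory to spawn new kworkers. The
general idiom, as a minimal sketch with a hypothetical queue name:

        struct workqueue_struct *wq;

        /* ordered queue that must keep draining even under memory pressure */
        wq = alloc_ordered_workqueue("example-submit-wq", WQ_MEM_RECLAIM);
        if (!wq)
                return -ENOMEM;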


@@ -1153,8 +1153,8 @@ static int host1x_drm_probe(struct host1x_device *dev)
if (host1x_drm_wants_iommu(dev) && device_iommu_mapped(dma_dev)) {
tegra->domain = iommu_paging_domain_alloc(dma_dev);
if (!tegra->domain) {
err = -ENOMEM;
if (IS_ERR(tegra->domain)) {
err = PTR_ERR(tegra->domain);
goto free;
}
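This is the classic NULL-vs-ERR_PTR mixup: iommu_paging_domain_alloc()
reports failure through ERR_PTR() encodings and never returns NULL, so the
old `if (!tegra->domain)` check could not fire and an error pointer would
later be treated as a valid domain. The standard idiom, sketched with
generic names:

        domain = iommu_paging_domain_alloc(dev);
        if (IS_ERR(domain))
                return PTR_ERR(domain); /* decode the errno carried in the pointer */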


@@ -996,7 +996,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb(struct kunit *test)
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1017,7 +1017,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc(struct kunit *test)
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1038,7 +1038,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_10bpc_vic_1(struct kunit *t
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
rate = drm_hdmi_compute_mode_clock(mode, 10, HDMI_COLORSPACE_RGB);
@@ -1056,7 +1056,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc(struct kunit *test)
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1077,7 +1077,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_12bpc_vic_1(struct kunit *t
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
rate = drm_hdmi_compute_mode_clock(mode, 12, HDMI_COLORSPACE_RGB);
@@ -1095,7 +1095,7 @@ static void drm_test_drm_hdmi_compute_mode_clock_rgb_double(struct kunit *test)
unsigned long long rate;
struct drm_device *drm = &priv->drm;
mode = drm_display_mode_from_cea_vic(drm, 6);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 6);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_TRUE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1118,7 +1118,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_valid(struct kunit
unsigned long long rate;
unsigned int vic = *(unsigned int *)test->param_value;
mode = drm_display_mode_from_cea_vic(drm, vic);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1155,7 +1155,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_10_bpc(struct kuni
drm_hdmi_compute_mode_clock_yuv420_vic_valid_tests[0];
unsigned long long rate;
mode = drm_display_mode_from_cea_vic(drm, vic);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1180,7 +1180,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv420_12_bpc(struct kuni
drm_hdmi_compute_mode_clock_yuv420_vic_valid_tests[0];
unsigned long long rate;
mode = drm_display_mode_from_cea_vic(drm, vic);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, vic);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1203,7 +1203,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_8_bpc(struct kunit
struct drm_device *drm = &priv->drm;
unsigned long long rate;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1225,7 +1225,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_10_bpc(struct kuni
struct drm_device *drm = &priv->drm;
unsigned long long rate;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);
@@ -1247,7 +1247,7 @@ static void drm_test_connector_hdmi_compute_mode_clock_yuv422_12_bpc(struct kuni
struct drm_device *drm = &priv->drm;
unsigned long long rate;
mode = drm_display_mode_from_cea_vic(drm, 16);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 16);
KUNIT_ASSERT_NOT_NULL(test, mode);
KUNIT_ASSERT_FALSE(test, mode->flags & DRM_MODE_FLAG_DBLCLK);


@@ -441,7 +441,7 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode_vic_1(struct kunit *test)
ctx = drm_kunit_helper_acquire_ctx_alloc(test);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
drm = &priv->drm;
@@ -555,7 +555,7 @@ static void drm_test_check_broadcast_rgb_full_cea_mode_vic_1(struct kunit *test)
ctx = drm_kunit_helper_acquire_ctx_alloc(test);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
drm = &priv->drm;
@@ -671,7 +671,7 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode_vic_1(struct kunit *te
ctx = drm_kunit_helper_acquire_ctx_alloc(test);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
drm = &priv->drm;
@@ -1263,7 +1263,7 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
ctx = drm_kunit_helper_acquire_ctx_alloc(test);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
mode = drm_display_mode_from_cea_vic(drm, 1);
mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
KUNIT_ASSERT_NOT_NULL(test, mode);
/*


@@ -3,6 +3,7 @@
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_drv.h>
#include <drm/drm_edid.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_kunit_helpers.h>
#include <drm/drm_managed.h>
@@ -311,6 +312,47 @@ drm_kunit_helper_create_crtc(struct kunit *test,
}
EXPORT_SYMBOL_GPL(drm_kunit_helper_create_crtc);
static void kunit_action_drm_mode_destroy(void *ptr)
{
struct drm_display_mode *mode = ptr;
drm_mode_destroy(NULL, mode);
}
/**
* drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
* @test: The test context object
* @dev: DRM device
* @video_code: CEA VIC of the mode
*
* Creates a new mode matching the specified CEA VIC for a KUnit test.
*
* Resources will be cleaned up automatically.
*
* Returns: A new drm_display_mode on success or NULL on failure
*/
struct drm_display_mode *
drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
u8 video_code)
{
struct drm_display_mode *mode;
int ret;
mode = drm_display_mode_from_cea_vic(dev, video_code);
if (!mode)
return NULL;
ret = kunit_add_action_or_reset(test,
kunit_action_drm_mode_destroy,
mode);
if (ret)
return NULL;
return mode;
}
EXPORT_SYMBOL_GPL(drm_kunit_display_mode_from_cea_vic);
MODULE_AUTHOR("Maxime Ripard <maxime@cerno.tech>");
MODULE_DESCRIPTION("KUnit test suite helper functions");
MODULE_LICENSE("GPL");


@@ -309,18 +309,7 @@ static void xe_display_flush_cleanup_work(struct xe_device *xe)
}
/* TODO: System and runtime suspend/resume sequences will be sanitized as a follow-up. */
void xe_display_pm_runtime_suspend(struct xe_device *xe)
{
if (!xe->info.probe_display)
return;
if (xe->d3cold.allowed)
xe_display_pm_suspend(xe, true);
intel_hpd_poll_enable(xe);
}
void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
static void __xe_display_pm_suspend(struct xe_device *xe, bool runtime)
{
struct intel_display *display = &xe->display;
bool s2idle = suspend_to_idle();
@@ -353,6 +342,27 @@ void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
intel_opregion_suspend(display, s2idle ? PCI_D1 : PCI_D3cold);
intel_dmc_suspend(xe);
if (runtime && has_display(xe))
intel_hpd_poll_enable(xe);
}
void xe_display_pm_suspend(struct xe_device *xe)
{
__xe_display_pm_suspend(xe, false);
}
void xe_display_pm_runtime_suspend(struct xe_device *xe)
{
if (!xe->info.probe_display)
return;
if (xe->d3cold.allowed) {
__xe_display_pm_suspend(xe, true);
return;
}
intel_hpd_poll_enable(xe);
}
void xe_display_pm_suspend_late(struct xe_device *xe)
@@ -366,17 +376,6 @@ void xe_display_pm_suspend_late(struct xe_device *xe)
intel_display_power_suspend_late(xe);
}
void xe_display_pm_runtime_resume(struct xe_device *xe)
{
if (!xe->info.probe_display)
return;
intel_hpd_poll_disable(xe);
if (xe->d3cold.allowed)
xe_display_pm_resume(xe, true);
}
void xe_display_pm_resume_early(struct xe_device *xe)
{
if (!xe->info.probe_display)
@@ -387,7 +386,7 @@ void xe_display_pm_resume_early(struct xe_device *xe)
intel_power_domains_resume(xe);
}
void xe_display_pm_resume(struct xe_device *xe, bool runtime)
static void __xe_display_pm_resume(struct xe_device *xe, bool runtime)
{
struct intel_display *display = &xe->display;
@@ -411,9 +410,11 @@ void xe_display_pm_resume(struct xe_device *xe, bool runtime)
intel_display_driver_resume(xe);
drm_kms_helper_poll_enable(&xe->drm);
intel_display_driver_enable_user_access(xe);
intel_hpd_poll_disable(xe);
}
if (has_display(xe))
intel_hpd_poll_disable(xe);
intel_opregion_resume(display);
intel_fbdev_set_suspend(&xe->drm, FBINFO_STATE_RUNNING, false);
@@ -421,6 +422,26 @@ void xe_display_pm_resume(struct xe_device *xe, bool runtime)
intel_power_domains_enable(xe);
}
void xe_display_pm_resume(struct xe_device *xe)
{
__xe_display_pm_resume(xe, false);
}
void xe_display_pm_runtime_resume(struct xe_device *xe)
{
if (!xe->info.probe_display)
return;
if (xe->d3cold.allowed) {
__xe_display_pm_resume(xe, true);
return;
}
intel_hpd_init(xe);
intel_hpd_poll_disable(xe);
}
static void display_device_remove(struct drm_device *dev, void *arg)
{
struct xe_device *xe = arg;
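Note the shape of the HPD fix visible above: in the non-d3cold runtime paths
the display keeps its power, but hotplug interrupts still need re-arming.
Previously runtime resume only disabled polling, so hotplug events could be
missed until a full resume; now suspend enables HPD polling while interrupts
are unavailable, and resume re-initializes HPD via intel_hpd_init() before
turning polling back off.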


@@ -34,10 +34,10 @@ void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir);
void xe_display_irq_reset(struct xe_device *xe);
void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt);
void xe_display_pm_suspend(struct xe_device *xe, bool runtime);
void xe_display_pm_suspend(struct xe_device *xe);
void xe_display_pm_suspend_late(struct xe_device *xe);
void xe_display_pm_resume_early(struct xe_device *xe);
void xe_display_pm_resume(struct xe_device *xe, bool runtime);
void xe_display_pm_resume(struct xe_device *xe);
void xe_display_pm_runtime_suspend(struct xe_device *xe);
void xe_display_pm_runtime_resume(struct xe_device *xe);
@@ -65,10 +65,10 @@ static inline void xe_display_irq_enable(struct xe_device *xe, u32 gu_misc_iir)
static inline void xe_display_irq_reset(struct xe_device *xe) {}
static inline void xe_display_irq_postinstall(struct xe_device *xe, struct xe_gt *gt) {}
static inline void xe_display_pm_suspend(struct xe_device *xe, bool runtime) {}
static inline void xe_display_pm_suspend(struct xe_device *xe) {}
static inline void xe_display_pm_suspend_late(struct xe_device *xe) {}
static inline void xe_display_pm_resume_early(struct xe_device *xe) {}
static inline void xe_display_pm_resume(struct xe_device *xe, bool runtime) {}
static inline void xe_display_pm_resume(struct xe_device *xe) {}
static inline void xe_display_pm_runtime_suspend(struct xe_device *xe) {}
static inline void xe_display_pm_runtime_resume(struct xe_device *xe) {}


@@ -397,6 +397,16 @@ static void ggtt_invalidate_gt_tlb(struct xe_gt *gt)
static void xe_ggtt_invalidate(struct xe_ggtt *ggtt)
{
struct xe_device *xe = tile_to_xe(ggtt->tile);
/*
* XXX: Barrier for GGTT pages. Unsure exactly why this required but
* without this LNL is having issues with the GuC reading scratch page
* vs. correct GGTT page. Not particularly a hot code path so blindly
* do a mmio read here which results in GuC reading correct GGTT page.
*/
xe_mmio_read32(xe_root_mmio_gt(xe), VF_CAP_REG);
/* Each GT in a tile has its own TLB to cache GGTT lookups */
ggtt_invalidate_gt_tlb(ggtt->tile->primary_gt);
ggtt_invalidate_gt_tlb(ggtt->tile->media_gt);
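The comment is candid that the mechanism is not fully understood, but the
pattern is a familiar one: an MMIO read cannot complete until previously
posted writes have reached the device, so it acts as a flush/ordering point
ahead of the TLB invalidation that the GuC observes; any harmless register
(here VF_CAP_REG) serves for that.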


@@ -916,12 +916,22 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
static bool check_timeout(struct xe_exec_queue *q, struct xe_sched_job *job)
{
struct xe_gt *gt = guc_to_gt(exec_queue_to_guc(q));
u32 ctx_timestamp = xe_lrc_ctx_timestamp(q->lrc[0]);
u32 ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
u32 ctx_timestamp, ctx_job_timestamp;
u32 timeout_ms = q->sched_props.job_timeout_ms;
u32 diff;
u64 running_time_ms;
if (!xe_sched_job_started(job)) {
xe_gt_warn(gt, "Check job timeout: seqno=%u, lrc_seqno=%u, guc_id=%d, not started",
xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
q->guc->id);
return xe_sched_invalidate_job(job, 2);
}
ctx_timestamp = xe_lrc_ctx_timestamp(q->lrc[0]);
ctx_job_timestamp = xe_lrc_ctx_job_timestamp(q->lrc[0]);
/*
* Counter wraps at ~223s at the usual 19.2MHz, be paranoid catch
* possible overflows with a high timeout.
*/
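(Checking the figure in that comment: a 32-bit timestamp ticking at 19.2 MHz
wraps after 2^32 / 19,200,000 Hz, i.e. roughly 223.7 seconds, hence "~223s".)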
@@ -1049,10 +1059,6 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
exec_queue_killed_or_banned_or_wedged(q) ||
exec_queue_destroyed(q);
/* Job hasn't started, can't be timed out */
if (!skip_timeout_check && !xe_sched_job_started(job))
goto rearm;
/*
* XXX: Sampling timeout doesn't work in wedged mode as we have to
* modify scheduling state to read timestamp. We could read the


@@ -123,7 +123,7 @@ int xe_pm_suspend(struct xe_device *xe)
for_each_gt(gt, xe, id)
xe_gt_suspend_prepare(gt);
xe_display_pm_suspend(xe, false);
xe_display_pm_suspend(xe);
/* FIXME: Super racey... */
err = xe_bo_evict_all(xe);
@@ -133,7 +133,7 @@ int xe_pm_suspend(struct xe_device *xe)
for_each_gt(gt, xe, id) {
err = xe_gt_suspend(gt);
if (err) {
xe_display_pm_resume(xe, false);
xe_display_pm_resume(xe);
goto err;
}
}
@@ -187,7 +187,7 @@ int xe_pm_resume(struct xe_device *xe)
for_each_gt(gt, xe, id)
xe_gt_resume(gt);
xe_display_pm_resume(xe, false);
xe_display_pm_resume(xe);
err = xe_bo_restore_user(xe);
if (err)


@@ -120,4 +120,8 @@ drm_kunit_helper_create_crtc(struct kunit *test,
const struct drm_crtc_funcs *funcs,
const struct drm_crtc_helper_funcs *helper_funcs);
struct drm_display_mode *
drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
u8 video_code);
#endif // DRM_KUNIT_HELPERS_H_