Merge tag 'drm-misc-next-2022-09-09' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.1-rc1:

[airlied - fix sun4i_tv build]

UAPI Changes:
- Hide unregistered connectors from GETCONNECTOR ioctl.
- drm/virtio no longer advertises LINEAR modifier, as it doesn't work.

Cross-subsystem Changes:
- Fix GPF in udmabuf failure path.

Core Changes:
- Rework TTM placement to use intersect/compatible functions.
- Drop legacy DP-MST support.
- More DP-MST related fixes, and move all state into atomic.
- Make DRM_MIPI_DBI select DRM_KMS_HELPER.
- Add audio_infoframe packing for DP.
- Add logging when some atomic check functions fail.
- Assorted documentation updates and fixes.

Driver Changes:
- Assorted cleanups and fixes in msm, lcdif, nouveau, virtio,
  panel/ilitek, bridge/icn6211, tve200, gma500, bridge/*, panfrost, via,
  bochs, qxl, sun4i.
- Add AUO B133UAN02.1, IVO M133NW4J-R3, Innolux N120ACA-EA1 eDP panels.
- Improve DP-MST modeset state handling in amdgpu, nouveau, i915.
- Drop DP-MST support from the radeon driver; it was broken and was the only
  user of legacy DP-MST.
- Handle unplugging better in vc4.
- Simplify drm cmdparser tests.
- Add DP support to ti-sn65dsi86.
- Add MT8195 DP support to mediatek.
- Support RGB565, XRGB64, and ARGB64 formats in vkms.
- Convert sun4i tv support to atomic.
- Refactor vc4/vec TV modesetting, and fix timings.
- Use atomic helpers instead of simple display helpers in ssd130x.

Maintainer changes:
- Add Douglas Anderson as reviewer for panel-edp.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/a489485b-3ebc-c734-0f80-aed963d89efe@linux.intel.com
This commit is contained in:
Dave Airlie 2022-09-11 21:46:57 +10:00
commit fb34d8a04e
106 changed files with 6065 additions and 3091 deletions


@ -24,6 +24,15 @@ properties:
maxItems: 1
description: virtual channel number of a DSI peripheral
clock-names:
const: refclk
clocks:
maxItems: 1
description: |
Optional external clock connected to REF_CLK input.
The clock rate must be in 10..154 MHz range.
enable-gpios:
description: Bridge EN pin, chip is reset when EN is low.


@ -14,6 +14,19 @@ properties:
compatible:
const: chrontel,ch7033
chrontel,byteswap:
$ref: /schemas/types.yaml#/definitions/uint8
enum:
- 0 # BYTE_SWAP_RGB
- 1 # BYTE_SWAP_RBG
- 2 # BYTE_SWAP_GRB
- 3 # BYTE_SWAP_GBR
- 4 # BYTE_SWAP_BRG
- 5 # BYTE_SWAP_BGR
description: |
Set the byteswap value of the bridge. This is optional; if not
set, the value BYTE_SWAP_BGR is used.
reg:
maxItems: 1
description: I2C address of the device


@ -0,0 +1,116 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/display/mediatek/mediatek,dp.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MediaTek Display Port Controller
maintainers:
- Chun-Kuang Hu <chunkuang.hu@kernel.org>
- Jitao shi <jitao.shi@mediatek.com>
description: |
MediaTek DP and eDP are different hardware and some features are not
supported on eDP. For example, audio is not supported on eDP. Therefore,
we need to use two different compatibles to describe them.
In addition, we just need to enable the power domain of DP; the clock
of DP is generated internally and no other PLL is used to generate
clocks.
properties:
compatible:
enum:
- mediatek,mt8195-dp-tx
- mediatek,mt8195-edp-tx
reg:
maxItems: 1
nvmem-cells:
maxItems: 1
description: efuse data for display port calibration
nvmem-cell-names:
const: dp_calibration_data
power-domains:
maxItems: 1
interrupts:
maxItems: 1
ports:
$ref: /schemas/graph.yaml#/properties/ports
properties:
port@0:
$ref: /schemas/graph.yaml#/properties/port
description: Input endpoint of the controller, usually dp_intf
port@1:
$ref: /schemas/graph.yaml#/$defs/port-base
unevaluatedProperties: false
description: Output endpoint of the controller
properties:
endpoint:
$ref: /schemas/media/video-interfaces.yaml#
unevaluatedProperties: false
properties:
data-lanes:
description: |
number of lanes supported by the hardware.
The possible values:
0 - For 1 lane enabled in IP.
0 1 - For 2 lanes enabled in IP.
0 1 2 3 - For 4 lanes enabled in IP.
minItems: 1
maxItems: 4
required:
- data-lanes
required:
- port@0
- port@1
max-linkrate-mhz:
enum: [ 1620, 2700, 5400, 8100 ]
description: maximum link rate supported by the hardware.
required:
- compatible
- reg
- interrupts
- ports
- max-linkrate-mhz
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/power/mt8195-power.h>
dptx@1c600000 {
compatible = "mediatek,mt8195-dp-tx";
reg = <0x1c600000 0x8000>;
power-domains = <&spm MT8195_POWER_DOMAIN_DP_TX>;
interrupts = <GIC_SPI 458 IRQ_TYPE_LEVEL_HIGH 0>;
max-linkrate-mhz = <8100>;
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
dptx_in: endpoint {
remote-endpoint = <&dp_intf0_out>;
};
};
port@1 {
reg = <1>;
dptx_out: endpoint {
data-lanes = <0 1 2 3>;
};
};
};
};


@ -118,15 +118,10 @@ Add Plane Features
There's lots of plane features we could add support for:
- Clearing primary plane: clear primary plane before plane composition (at the
start) for correctness of pixel blend ops. It also guarantees alpha channel
is cleared in the target buffer for stable crc. [Good to get started]
- ARGB format on primary plane: blend the primary plane into background with
translucent alpha.
- Support when the primary plane isn't exactly matching the output size: blend
the primary plane into the black background.
- Add background color KMS property [Good to get started].
- Full alpha blending on all planes.


@ -6419,6 +6419,11 @@ S: Maintained
F: Documentation/devicetree/bindings/display/panel/feiyang,fy07024di26a30d.yaml
F: drivers/gpu/drm/panel/panel-feiyang-fy07024di26a30d.c
DRM DRIVER FOR GENERIC EDP PANELS
R: Douglas Anderson <dianders@chromium.org>
F: Documentation/devicetree/bindings/display/panel/panel-edp.yaml
F: drivers/gpu/drm/panel/panel-edp.c
DRM DRIVER FOR GENERIC USB DISPLAY
M: Noralf Trønnes <noralf@tronnes.org>
S: Maintained


@ -124,17 +124,20 @@ static int begin_cpu_udmabuf(struct dma_buf *buf,
{
struct udmabuf *ubuf = buf->priv;
struct device *dev = ubuf->device->this_device;
int ret = 0;
if (!ubuf->sg) {
ubuf->sg = get_sg_table(dev, buf, direction);
if (IS_ERR(ubuf->sg))
return PTR_ERR(ubuf->sg);
if (IS_ERR(ubuf->sg)) {
ret = PTR_ERR(ubuf->sg);
ubuf->sg = NULL;
}
} else {
dma_sync_sg_for_cpu(dev, ubuf->sg->sgl, ubuf->sg->nents,
direction);
}
return 0;
return ret;
}
static int end_cpu_udmabuf(struct dma_buf *buf,


@ -31,6 +31,7 @@ menuconfig DRM
config DRM_MIPI_DBI
tristate
depends on DRM
select DRM_KMS_HELPER
config DRM_MIPI_DSI
bool


@ -204,6 +204,42 @@ void amdgpu_gtt_mgr_recover(struct amdgpu_gtt_mgr *mgr)
amdgpu_gart_invalidate_tlb(adev);
}
/**
* amdgpu_gtt_mgr_intersects - test for intersection
*
* @man: Our manager object
* @res: The resource to test
* @place: The place for the new allocation
* @size: The size of the new allocation
*
* Simplified intersection test, only interesting if we need GART or not.
*/
static bool amdgpu_gtt_mgr_intersects(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
return !place->lpfn || amdgpu_gtt_mgr_has_gart_addr(res);
}
/**
* amdgpu_gtt_mgr_compatible - test for compatibility
*
* @man: Our manager object
* @res: The resource to test
* @place: The place for the new allocation
* @size: The size of the new allocation
*
* Simplified compatibility test.
*/
static bool amdgpu_gtt_mgr_compatible(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
return !place->lpfn || amdgpu_gtt_mgr_has_gart_addr(res);
}
/**
* amdgpu_gtt_mgr_debug - dump VRAM table
*
@ -225,6 +261,8 @@ static void amdgpu_gtt_mgr_debug(struct ttm_resource_manager *man,
static const struct ttm_resource_manager_func amdgpu_gtt_mgr_func = {
.alloc = amdgpu_gtt_mgr_new,
.free = amdgpu_gtt_mgr_del,
.intersects = amdgpu_gtt_mgr_intersects,
.compatible = amdgpu_gtt_mgr_compatible,
.debug = amdgpu_gtt_mgr_debug
};


@ -1330,11 +1330,12 @@ uint64_t amdgpu_ttm_tt_pte_flags(struct amdgpu_device *adev, struct ttm_tt *ttm,
static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
const struct ttm_place *place)
{
unsigned long num_pages = bo->resource->num_pages;
struct dma_resv_iter resv_cursor;
struct amdgpu_res_cursor cursor;
struct dma_fence *f;
if (!amdgpu_bo_is_amdgpu_bo(bo))
return ttm_bo_eviction_valuable(bo, place);
/* Swapout? */
if (bo->resource->mem_type == TTM_PL_SYSTEM)
return true;
@ -1353,39 +1354,19 @@ static bool amdgpu_ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
return false;
}
switch (bo->resource->mem_type) {
case AMDGPU_PL_PREEMPT:
/* Preemptible BOs don't own system resources managed by the
* driver (pages, VRAM, GART space). They point to resources
* owned by someone else (e.g. pageable memory in user mode
* or a DMABuf). They are used in a preemptible context so we
* can guarantee no deadlocks and good QoS in case of MMU
* notifiers or DMABuf move notifiers from the resource owner.
*/
return false;
case TTM_PL_TT:
if (amdgpu_bo_is_amdgpu_bo(bo) &&
amdgpu_bo_encrypted(ttm_to_amdgpu_bo(bo)))
return false;
return true;
case TTM_PL_VRAM:
/* Check each drm MM node individually */
amdgpu_res_first(bo->resource, 0, (u64)num_pages << PAGE_SHIFT,
&cursor);
while (cursor.remaining) {
if (place->fpfn < PFN_DOWN(cursor.start + cursor.size)
&& !(place->lpfn &&
place->lpfn <= PFN_DOWN(cursor.start)))
return true;
amdgpu_res_next(&cursor, cursor.size);
}
/* Preemptible BOs don't own system resources managed by the
* driver (pages, VRAM, GART space). They point to resources
* owned by someone else (e.g. pageable memory in user mode
* or a DMABuf). They are used in a preemptible context so we
* can guarantee no deadlocks and good QoS in case of MMU
* notifiers or DMABuf move notifiers from the resource owner.
*/
if (bo->resource->mem_type == AMDGPU_PL_PREEMPT)
return false;
default:
break;
}
if (bo->resource->mem_type == TTM_PL_TT &&
amdgpu_bo_encrypted(ttm_to_amdgpu_bo(bo)))
return false;
return ttm_bo_eviction_valuable(bo, place);
}


@ -720,6 +720,72 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
return atomic64_read(&mgr->vis_usage);
}
/**
* amdgpu_vram_mgr_intersects - test each drm buddy block for intersection
*
* @man: TTM memory type manager
* @res: The resource to test
* @place: The place to test against
* @size: Size of the new allocation
*
* Test each drm buddy block for intersection for eviction decision.
*/
static bool amdgpu_vram_mgr_intersects(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
struct drm_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
unsigned long fpfn =
amdgpu_vram_mgr_block_start(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
(amdgpu_vram_mgr_block_size(block) >> PAGE_SHIFT);
if (place->fpfn < lpfn &&
(place->lpfn && place->lpfn > fpfn))
return true;
}
return false;
}
/**
* amdgpu_vram_mgr_compatible - test each drm buddy block for compatibility
*
* @man: TTM memory type manager
* @res: The resource to test
* @place: The place to test against
* @size: Size of the new allocation
*
* Test each drm buddy block for placement compatibility.
*/
static bool amdgpu_vram_mgr_compatible(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct amdgpu_vram_mgr_resource *mgr = to_amdgpu_vram_mgr_resource(res);
struct drm_buddy_block *block;
/* Check each drm buddy block individually */
list_for_each_entry(block, &mgr->blocks, link) {
unsigned long fpfn =
amdgpu_vram_mgr_block_start(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
(amdgpu_vram_mgr_block_size(block) >> PAGE_SHIFT);
if (fpfn < place->fpfn ||
(place->lpfn && lpfn > place->lpfn))
return false;
}
return true;
}
/**
* amdgpu_vram_mgr_debug - dump VRAM table
*
@ -753,6 +819,8 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
.alloc = amdgpu_vram_mgr_new,
.free = amdgpu_vram_mgr_del,
.intersects = amdgpu_vram_mgr_intersects,
.compatible = amdgpu_vram_mgr_compatible,
.debug = amdgpu_vram_mgr_debug
};


@ -2808,7 +2808,8 @@ static const struct drm_mode_config_funcs amdgpu_dm_mode_funcs = {
};
static struct drm_mode_config_helper_funcs amdgpu_dm_mode_config_helperfuncs = {
.atomic_commit_tail = amdgpu_dm_atomic_commit_tail
.atomic_commit_tail = amdgpu_dm_atomic_commit_tail,
.atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
};
static void update_connector_ext_caps(struct amdgpu_dm_connector *aconnector)
@ -6295,10 +6296,17 @@ amdgpu_dm_connector_atomic_check(struct drm_connector *conn,
drm_atomic_get_old_connector_state(state, conn);
struct drm_crtc *crtc = new_con_state->crtc;
struct drm_crtc_state *new_crtc_state;
struct amdgpu_dm_connector *aconn = to_amdgpu_dm_connector(conn);
int ret;
trace_amdgpu_dm_connector_atomic_check(new_con_state);
if (conn->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
ret = drm_dp_mst_root_conn_atomic_check(new_con_state, &aconn->mst_mgr);
if (ret < 0)
return ret;
}
if (!crtc)
return 0;
@ -6382,6 +6390,7 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
const struct drm_display_mode *adjusted_mode = &crtc_state->adjusted_mode;
struct drm_dp_mst_topology_mgr *mst_mgr;
struct drm_dp_mst_port *mst_port;
struct drm_dp_mst_topology_state *mst_state;
enum dc_color_depth color_depth;
int clock, bpp = 0;
bool is_y420 = false;
@ -6395,6 +6404,13 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
if (!crtc_state->connectors_changed && !crtc_state->mode_changed)
return 0;
mst_state = drm_atomic_get_mst_topology_state(state, mst_mgr);
if (IS_ERR(mst_state))
return PTR_ERR(mst_state);
if (!mst_state->pbn_div)
mst_state->pbn_div = dm_mst_get_pbn_divider(aconnector->mst_port->dc_link);
if (!state->duplicated) {
int max_bpc = conn_state->max_requested_bpc;
is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) &&
@ -6406,11 +6422,10 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder,
clock = adjusted_mode->clock;
dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false);
}
dm_new_connector_state->vcpi_slots = drm_dp_atomic_find_vcpi_slots(state,
mst_mgr,
mst_port,
dm_new_connector_state->pbn,
dm_mst_get_pbn_divider(aconnector->dc_link));
dm_new_connector_state->vcpi_slots =
drm_dp_atomic_find_time_slots(state, mst_mgr, mst_port,
dm_new_connector_state->pbn);
if (dm_new_connector_state->vcpi_slots < 0) {
DRM_DEBUG_ATOMIC("failed finding vcpi slots: %d\n", (int)dm_new_connector_state->vcpi_slots);
return dm_new_connector_state->vcpi_slots;
@ -6480,18 +6495,12 @@ static int dm_update_mst_vcpi_slots_for_dsc(struct drm_atomic_state *state,
dm_conn_state->pbn = pbn;
dm_conn_state->vcpi_slots = slot_num;
drm_dp_mst_atomic_enable_dsc(state,
aconnector->port,
dm_conn_state->pbn,
0,
drm_dp_mst_atomic_enable_dsc(state, aconnector->port, dm_conn_state->pbn,
false);
continue;
}
vcpi = drm_dp_mst_atomic_enable_dsc(state,
aconnector->port,
pbn, pbn_div,
true);
vcpi = drm_dp_mst_atomic_enable_dsc(state, aconnector->port, pbn, true);
if (vcpi < 0)
return vcpi;
@ -7966,6 +7975,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
DRM_ERROR("Waiting for fences timed out!");
drm_atomic_helper_update_legacy_modeset_state(dev, state);
drm_dp_mst_atomic_wait_for_dependencies(state);
dm_state = dm_atomic_get_new_state(state);
if (dm_state && dm_state->context) {
@ -8364,7 +8374,6 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
dc_release_state(dc_state_temp);
}
static int dm_force_atomic_commit(struct drm_connector *connector)
{
int ret = 0;
@ -9335,8 +9344,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
#if defined(CONFIG_DRM_AMD_DC_DCN)
struct dsc_mst_fairness_vars vars[MAX_PIPES];
struct drm_dp_mst_topology_state *mst_state;
struct drm_dp_mst_topology_mgr *mgr;
#endif
trace_amdgpu_dm_atomic_check_begin(state);
@ -9575,33 +9582,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
lock_and_validation_needed = true;
}
#if defined(CONFIG_DRM_AMD_DC_DCN)
/* set the slot info for each mst_state based on the link encoding format */
for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
struct amdgpu_dm_connector *aconnector;
struct drm_connector *connector;
struct drm_connector_list_iter iter;
u8 link_coding_cap;
if (!mgr->mst_state )
continue;
drm_connector_list_iter_begin(dev, &iter);
drm_for_each_connector_iter(connector, &iter) {
int id = connector->index;
if (id == mst_state->mgr->conn_base_id) {
aconnector = to_amdgpu_dm_connector(connector);
link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
drm_dp_mst_update_slots(mst_state, link_coding_cap);
break;
}
}
drm_connector_list_iter_end(&iter);
}
#endif
/**
* Streams and planes are reset when there are changes that affect
* bandwidth. Anything that affects bandwidth needs to go through


@ -27,6 +27,7 @@
#include <linux/acpi.h>
#include <linux/i2c.h>
#include <drm/drm_atomic.h>
#include <drm/drm_probe_helper.h>
#include <drm/amdgpu_drm.h>
#include <drm/drm_edid.h>
@ -153,41 +154,28 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
return result;
}
static void get_payload_table(
struct amdgpu_dm_connector *aconnector,
struct dp_mst_stream_allocation_table *proposed_table)
static void
fill_dc_mst_payload_table_from_drm(struct drm_dp_mst_topology_state *mst_state,
struct amdgpu_dm_connector *aconnector,
struct dc_dp_mst_stream_allocation_table *table)
{
int i;
struct drm_dp_mst_topology_mgr *mst_mgr =
&aconnector->mst_port->mst_mgr;
struct dc_dp_mst_stream_allocation_table new_table = { 0 };
struct dc_dp_mst_stream_allocation *sa;
struct drm_dp_mst_atomic_payload *payload;
mutex_lock(&mst_mgr->payload_lock);
/* Fill payload info*/
list_for_each_entry(payload, &mst_state->payloads, next) {
if (payload->delete)
continue;
proposed_table->stream_count = 0;
/* number of active streams */
for (i = 0; i < mst_mgr->max_payloads; i++) {
if (mst_mgr->payloads[i].num_slots == 0)
break; /* end of vcp_id table */
ASSERT(mst_mgr->payloads[i].payload_state !=
DP_PAYLOAD_DELETE_LOCAL);
if (mst_mgr->payloads[i].payload_state == DP_PAYLOAD_LOCAL ||
mst_mgr->payloads[i].payload_state ==
DP_PAYLOAD_REMOTE) {
struct dp_mst_stream_allocation *sa =
&proposed_table->stream_allocations[
proposed_table->stream_count];
sa->slot_count = mst_mgr->payloads[i].num_slots;
sa->vcp_id = mst_mgr->proposed_vcpis[i]->vcpi;
proposed_table->stream_count++;
}
sa = &new_table.stream_allocations[new_table.stream_count];
sa->slot_count = payload->time_slots;
sa->vcp_id = payload->vcpi;
new_table.stream_count++;
}
mutex_unlock(&mst_mgr->payload_lock);
/* Overwrite the old table */
*table = new_table;
}
void dm_helpers_dp_update_branch_info(
@ -201,15 +189,13 @@ void dm_helpers_dp_update_branch_info(
bool dm_helpers_dp_mst_write_payload_allocation_table(
struct dc_context *ctx,
const struct dc_stream_state *stream,
struct dp_mst_stream_allocation_table *proposed_table,
struct dc_dp_mst_stream_allocation_table *proposed_table,
bool enable)
{
struct amdgpu_dm_connector *aconnector;
struct dm_connector_state *dm_conn_state;
struct drm_dp_mst_topology_state *mst_state;
struct drm_dp_mst_atomic_payload *payload;
struct drm_dp_mst_topology_mgr *mst_mgr;
struct drm_dp_mst_port *mst_port;
bool ret;
u8 link_coding_cap = DP_8b_10b_ENCODING;
aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
/* Accessing the connector state is required for vcpi_slots allocation
@ -220,40 +206,21 @@ bool dm_helpers_dp_mst_write_payload_allocation_table(
if (!aconnector || !aconnector->mst_port)
return false;
dm_conn_state = to_dm_connector_state(aconnector->base.state);
mst_mgr = &aconnector->mst_port->mst_mgr;
if (!mst_mgr->mst_state)
return false;
mst_port = aconnector->port;
#if defined(CONFIG_DRM_AMD_DC_DCN)
link_coding_cap = dc_link_dp_mst_decide_link_encoding_format(aconnector->dc_link);
#endif
if (enable) {
ret = drm_dp_mst_allocate_vcpi(mst_mgr, mst_port,
dm_conn_state->pbn,
dm_conn_state->vcpi_slots);
if (!ret)
return false;
} else {
drm_dp_mst_reset_vcpi_slots(mst_mgr, mst_port);
}
mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
/* It's OK for this to fail */
drm_dp_update_payload_part1(mst_mgr, (link_coding_cap == DP_CAP_ANSI_128B132B) ? 0:1);
payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
if (enable)
drm_dp_add_payload_part1(mst_mgr, mst_state, payload);
else
drm_dp_remove_payload(mst_mgr, mst_state, payload);
/* mst_mgr->payloads are the VC payloads that notify the MST branch via DPCD
* or AUX messages. The sequence is: slots 1-63 are allocated in sequence for
* each stream. AMD ASIC stream slot allocation should follow the same
* sequence. Copy the DRM MST allocation to dc. */
get_payload_table(aconnector, proposed_table);
fill_dc_mst_payload_table_from_drm(mst_state, aconnector, proposed_table);
return true;
}
@ -310,8 +277,9 @@ bool dm_helpers_dp_mst_send_payload_allocation(
bool enable)
{
struct amdgpu_dm_connector *aconnector;
struct drm_dp_mst_topology_state *mst_state;
struct drm_dp_mst_topology_mgr *mst_mgr;
struct drm_dp_mst_port *mst_port;
struct drm_dp_mst_atomic_payload *payload;
enum mst_progress_status set_flag = MST_ALLOCATE_NEW_PAYLOAD;
enum mst_progress_status clr_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
@ -320,19 +288,16 @@ bool dm_helpers_dp_mst_send_payload_allocation(
if (!aconnector || !aconnector->mst_port)
return false;
mst_port = aconnector->port;
mst_mgr = &aconnector->mst_port->mst_mgr;
mst_state = to_drm_dp_mst_topology_state(mst_mgr->base.state);
if (!mst_mgr->mst_state)
return false;
payload = drm_atomic_get_mst_payload_state(mst_state, aconnector->port);
if (!enable) {
set_flag = MST_CLEAR_ALLOCATED_PAYLOAD;
clr_flag = MST_ALLOCATE_NEW_PAYLOAD;
}
if (drm_dp_update_payload_part2(mst_mgr)) {
if (enable && drm_dp_add_payload_part2(mst_mgr, mst_state->base.state, payload)) {
amdgpu_dm_set_mst_status(&aconnector->mst_status,
set_flag, false);
} else {
@ -342,9 +307,6 @@ bool dm_helpers_dp_mst_send_payload_allocation(
clr_flag, false);
}
if (!enable)
drm_dp_mst_deallocate_vcpi(mst_mgr, mst_port);
return true;
}


@ -447,34 +447,13 @@ dm_dp_mst_detect(struct drm_connector *connector,
}
static int dm_dp_mst_atomic_check(struct drm_connector *connector,
struct drm_atomic_state *state)
struct drm_atomic_state *state)
{
struct drm_connector_state *new_conn_state =
drm_atomic_get_new_connector_state(state, connector);
struct drm_connector_state *old_conn_state =
drm_atomic_get_old_connector_state(state, connector);
struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
struct drm_crtc_state *new_crtc_state;
struct drm_dp_mst_topology_mgr *mst_mgr;
struct drm_dp_mst_port *mst_port;
struct drm_dp_mst_topology_mgr *mst_mgr = &aconnector->mst_port->mst_mgr;
struct drm_dp_mst_port *mst_port = aconnector->port;
mst_port = aconnector->port;
mst_mgr = &aconnector->mst_port->mst_mgr;
if (!old_conn_state->crtc)
return 0;
if (new_conn_state->crtc) {
new_crtc_state = drm_atomic_get_new_crtc_state(state, new_conn_state->crtc);
if (!new_crtc_state ||
!drm_atomic_crtc_needs_modeset(new_crtc_state) ||
new_crtc_state->enable)
return 0;
}
return drm_dp_atomic_release_vcpi_slots(state,
mst_mgr,
mst_port);
return drm_dp_atomic_release_time_slots(state, mst_mgr, mst_port);
}
static const struct drm_connector_helper_funcs dm_dp_mst_connector_helper_funcs = {
@ -618,15 +597,8 @@ void amdgpu_dm_initialize_dp_connector(struct amdgpu_display_manager *dm,
dc_link_dp_get_max_link_enc_cap(aconnector->dc_link, &max_link_enc_cap);
aconnector->mst_mgr.cbs = &dm_mst_cbs;
drm_dp_mst_topology_mgr_init(
&aconnector->mst_mgr,
adev_to_drm(dm->adev),
&aconnector->dm_dp_aux.aux,
16,
4,
max_link_enc_cap.lane_count,
drm_dp_bw_code_to_link_rate(max_link_enc_cap.link_rate),
aconnector->connector_id);
drm_dp_mst_topology_mgr_init(&aconnector->mst_mgr, adev_to_drm(dm->adev),
&aconnector->dm_dp_aux.aux, 16, 4, aconnector->connector_id);
drm_connector_attach_dp_subconnector_property(&aconnector->base);
}
@ -731,6 +703,7 @@ static int bpp_x16_from_pbn(struct dsc_mst_fairness_params param, int pbn)
}
static bool increase_dsc_bpp(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct dc_link *dc_link,
struct dsc_mst_fairness_params *params,
struct dsc_mst_fairness_vars *vars,
@ -743,12 +716,9 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
int min_initial_slack;
int next_index;
int remaining_to_increase = 0;
int pbn_per_timeslot;
int link_timeslots_used;
int fair_pbn_alloc;
pbn_per_timeslot = dm_mst_get_pbn_divider(dc_link);
for (i = 0; i < count; i++) {
if (vars[i + k].dsc_enabled) {
initial_slack[i] =
@ -779,46 +749,43 @@ static bool increase_dsc_bpp(struct drm_atomic_state *state,
link_timeslots_used = 0;
for (i = 0; i < count; i++)
link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, pbn_per_timeslot);
link_timeslots_used += DIV_ROUND_UP(vars[i + k].pbn, mst_state->pbn_div);
fair_pbn_alloc = (63 - link_timeslots_used) / remaining_to_increase * pbn_per_timeslot;
fair_pbn_alloc =
(63 - link_timeslots_used) / remaining_to_increase * mst_state->pbn_div;
if (initial_slack[next_index] > fair_pbn_alloc) {
vars[next_index].pbn += fair_pbn_alloc;
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
pbn_per_timeslot) < 0)
vars[next_index].pbn) < 0)
return false;
if (!drm_dp_mst_atomic_check(state)) {
vars[next_index].bpp_x16 = bpp_x16_from_pbn(params[next_index], vars[next_index].pbn);
} else {
vars[next_index].pbn -= fair_pbn_alloc;
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
pbn_per_timeslot) < 0)
vars[next_index].pbn) < 0)
return false;
}
} else {
vars[next_index].pbn += initial_slack[next_index];
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
pbn_per_timeslot) < 0)
vars[next_index].pbn) < 0)
return false;
if (!drm_dp_mst_atomic_check(state)) {
vars[next_index].bpp_x16 = params[next_index].bw_range.max_target_bpp_x16;
} else {
vars[next_index].pbn -= initial_slack[next_index];
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
pbn_per_timeslot) < 0)
vars[next_index].pbn) < 0)
return false;
}
}
@ -872,11 +839,10 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
break;
vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.stream_kbps);
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
dm_mst_get_pbn_divider(dc_link)) < 0)
vars[next_index].pbn) < 0)
return false;
if (!drm_dp_mst_atomic_check(state)) {
@ -884,11 +850,10 @@ static bool try_disable_dsc(struct drm_atomic_state *state,
vars[next_index].bpp_x16 = 0;
} else {
vars[next_index].pbn = kbps_to_peak_pbn(params[next_index].bw_range.max_kbps);
if (drm_dp_atomic_find_vcpi_slots(state,
if (drm_dp_atomic_find_time_slots(state,
params[next_index].port->mgr,
params[next_index].port,
vars[next_index].pbn,
dm_mst_get_pbn_divider(dc_link)) < 0)
vars[next_index].pbn) < 0)
return false;
}
@ -902,17 +867,27 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
struct dc_state *dc_state,
struct dc_link *dc_link,
struct dsc_mst_fairness_vars *vars,
struct drm_dp_mst_topology_mgr *mgr,
int *link_vars_start_index)
{
int i, k;
struct dc_stream_state *stream;
struct dsc_mst_fairness_params params[MAX_PIPES];
struct amdgpu_dm_connector *aconnector;
struct drm_dp_mst_topology_state *mst_state = drm_atomic_get_mst_topology_state(state, mgr);
int count = 0;
int i, k;
bool debugfs_overwrite = false;
memset(params, 0, sizeof(params));
if (IS_ERR(mst_state))
return false;
mst_state->pbn_div = dm_mst_get_pbn_divider(dc_link);
#if defined(CONFIG_DRM_AMD_DC_DCN)
drm_dp_mst_update_slots(mst_state, dc_link_dp_mst_decide_link_encoding_format(dc_link));
#endif
/* Set up params */
for (i = 0; i < dc_state->stream_count; i++) {
struct dc_dsc_policy dsc_policy = {0};
@ -971,11 +946,8 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
vars[i + k].dsc_enabled = false;
vars[i + k].bpp_x16 = 0;
if (drm_dp_atomic_find_vcpi_slots(state,
params[i].port->mgr,
params[i].port,
vars[i + k].pbn,
dm_mst_get_pbn_divider(dc_link)) < 0)
if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr, params[i].port,
vars[i + k].pbn) < 0)
return false;
}
if (!drm_dp_mst_atomic_check(state) && !debugfs_overwrite) {
@ -989,21 +961,15 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.min_kbps);
vars[i + k].dsc_enabled = true;
vars[i + k].bpp_x16 = params[i].bw_range.min_target_bpp_x16;
if (drm_dp_atomic_find_vcpi_slots(state,
params[i].port->mgr,
params[i].port,
vars[i + k].pbn,
dm_mst_get_pbn_divider(dc_link)) < 0)
if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
params[i].port, vars[i + k].pbn) < 0)
return false;
} else {
vars[i + k].pbn = kbps_to_peak_pbn(params[i].bw_range.stream_kbps);
vars[i + k].dsc_enabled = false;
vars[i + k].bpp_x16 = 0;
if (drm_dp_atomic_find_vcpi_slots(state,
params[i].port->mgr,
params[i].port,
vars[i + k].pbn,
dm_mst_get_pbn_divider(dc_link)) < 0)
if (drm_dp_atomic_find_time_slots(state, params[i].port->mgr,
params[i].port, vars[i + k].pbn) < 0)
return false;
}
}
@ -1011,7 +977,7 @@ static bool compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
return false;
/* Optimize degree of compression */
if (!increase_dsc_bpp(state, dc_link, params, vars, count, k))
if (!increase_dsc_bpp(state, mst_state, dc_link, params, vars, count, k))
return false;
if (!try_disable_dsc(state, dc_link, params, vars, count, k))
@ -1157,8 +1123,9 @@ bool compute_mst_dsc_configs_for_state(struct drm_atomic_state *state,
continue;
mutex_lock(&aconnector->mst_mgr.lock);
if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link,
vars, &link_vars_start_index)) {
if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
&aconnector->mst_mgr,
&link_vars_start_index)) {
mutex_unlock(&aconnector->mst_mgr.lock);
return false;
}
@ -1216,10 +1183,8 @@ static bool
continue;
mutex_lock(&aconnector->mst_mgr.lock);
if (!compute_mst_dsc_configs_for_link(state,
dc_state,
stream->link,
vars,
if (!compute_mst_dsc_configs_for_link(state, dc_state, stream->link, vars,
&aconnector->mst_mgr,
&link_vars_start_index)) {
mutex_unlock(&aconnector->mst_mgr.lock);
return false;


@ -3516,7 +3516,7 @@ static void update_mst_stream_alloc_table(
struct dc_link *link,
struct stream_encoder *stream_enc,
struct hpo_dp_stream_encoder *hpo_dp_stream_enc, // TODO: Rename stream_enc to dio_stream_enc?
const struct dp_mst_stream_allocation_table *proposed_table)
const struct dc_dp_mst_stream_allocation_table *proposed_table)
{
struct link_mst_stream_allocation work_table[MAX_CONTROLLER_NUM] = { 0 };
struct link_mst_stream_allocation *dc_alloc;
@ -3679,7 +3679,7 @@ enum dc_status dc_link_allocate_mst_payload(struct pipe_ctx *pipe_ctx)
{
struct dc_stream_state *stream = pipe_ctx->stream;
struct dc_link *link = stream->link;
struct dp_mst_stream_allocation_table proposed_table = {0};
struct dc_dp_mst_stream_allocation_table proposed_table = {0};
struct fixed31_32 avg_time_slots_per_mtp;
struct fixed31_32 pbn;
struct fixed31_32 pbn_per_slot;
@@ -3784,7 +3784,7 @@ enum dc_status dc_link_reduce_mst_payload(struct pipe_ctx *pipe_ctx, uint32_t bw
struct fixed31_32 avg_time_slots_per_mtp;
struct fixed31_32 pbn;
struct fixed31_32 pbn_per_slot;
struct dp_mst_stream_allocation_table proposed_table = {0};
struct dc_dp_mst_stream_allocation_table proposed_table = {0};
uint8_t i;
const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
DC_LOGGER_INIT(link->ctx->logger);
@@ -3873,7 +3873,7 @@ enum dc_status dc_link_increase_mst_payload(struct pipe_ctx *pipe_ctx, uint32_t
struct fixed31_32 avg_time_slots_per_mtp;
struct fixed31_32 pbn;
struct fixed31_32 pbn_per_slot;
struct dp_mst_stream_allocation_table proposed_table = {0};
struct dc_dp_mst_stream_allocation_table proposed_table = {0};
uint8_t i;
enum act_return_status ret;
const struct link_hwss *link_hwss = get_link_hwss(link, &pipe_ctx->link_res);
@@ -3957,7 +3957,7 @@ static enum dc_status deallocate_mst_payload(struct pipe_ctx *pipe_ctx)
{
struct dc_stream_state *stream = pipe_ctx->stream;
struct dc_link *link = stream->link;
struct dp_mst_stream_allocation_table proposed_table = {0};
struct dc_dp_mst_stream_allocation_table proposed_table = {0};
struct fixed31_32 avg_time_slots_per_mtp = dc_fixpt_from_int(0);
int i;
bool mst_mode = (link->type == dc_connection_mst_branch);


@@ -33,7 +33,7 @@
#include "dc_types.h"
#include "dc.h"
struct dp_mst_stream_allocation_table;
struct dc_dp_mst_stream_allocation_table;
struct aux_payload;
enum aux_return_code_type;
@@ -77,7 +77,7 @@ void dm_helpers_dp_update_branch_info(
bool dm_helpers_dp_mst_write_payload_allocation_table(
struct dc_context *ctx,
const struct dc_stream_state *stream,
struct dp_mst_stream_allocation_table *proposed_table,
struct dc_dp_mst_stream_allocation_table *proposed_table,
bool enable);
/*


@@ -246,8 +246,16 @@ union dpcd_training_lane_set {
};
/* AMD's copy of various payload data for MST. We have two copies of the payload table (one in DRM,
* one in DC) since DRM's MST helpers can't be accessed here. This stream allocation table should
* _ONLY_ be filled out from DM and then passed to DC, do NOT use these for _any_ kind of atomic
* state calculations in DM, or you will break something.
*/
struct drm_dp_mst_port;
/* DP MST stream allocation (payload bandwidth number) */
struct dp_mst_stream_allocation {
struct dc_dp_mst_stream_allocation {
uint8_t vcp_id;
/* number of slots required for the DP stream in
* transport packet */
@@ -255,11 +263,11 @@ struct dp_mst_stream_allocation {
};
/* DP MST stream allocation table */
struct dp_mst_stream_allocation_table {
struct dc_dp_mst_stream_allocation_table {
/* number of DP video streams */
int stream_count;
/* array of stream allocations */
struct dp_mst_stream_allocation stream_allocations[MAX_CONTROLLER_NUM];
struct dc_dp_mst_stream_allocation stream_allocations[MAX_CONTROLLER_NUM];
};
#endif /*__DAL_LINK_SERVICE_TYPES_H__*/


@@ -1440,6 +1440,20 @@ static void anx7625_start_dp_work(struct anx7625_data *ctx)
static int anx7625_read_hpd_status_p0(struct anx7625_data *ctx)
{
int ret;
/* Set irq detect window to 2ms */
ret = anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT0_7, HPD_TIME & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT8_15,
(HPD_TIME >> 8) & 0xFF);
ret |= anx7625_reg_write(ctx, ctx->i2c.tx_p2_client,
HPD_DET_TIMER_BIT16_23,
(HPD_TIME >> 16) & 0xFF);
if (ret < 0)
return ret;
return anx7625_reg_read(ctx, ctx->i2c.rx_p0_client, SYSTEM_STSTUS);
}
@@ -1797,8 +1811,13 @@ static int anx7625_audio_hw_params(struct device *dev, void *data,
int wl, ch, rate;
int ret = 0;
if (fmt->fmt != HDMI_DSP_A) {
DRM_DEV_ERROR(dev, "only supports DSP_A\n");
if (anx7625_sink_detect(ctx) == connector_status_disconnected) {
DRM_DEV_DEBUG_DRIVER(dev, "DP not connected\n");
return 0;
}
if (fmt->fmt != HDMI_DSP_A && fmt->fmt != HDMI_I2S) {
DRM_DEV_ERROR(dev, "only supports DSP_A & I2S\n");
return -EINVAL;
}
@@ -1806,10 +1825,16 @@ static int anx7625_audio_hw_params(struct device *dev, void *data,
params->sample_rate, params->sample_width,
params->cea.channels);
ret |= anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
AUDIO_CHANNEL_STATUS_6,
~I2S_SLAVE_MODE,
TDM_SLAVE_MODE);
if (fmt->fmt == HDMI_DSP_A)
ret = anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
AUDIO_CHANNEL_STATUS_6,
~I2S_SLAVE_MODE,
TDM_SLAVE_MODE);
else
ret = anx7625_write_and_or(ctx, ctx->i2c.tx_p2_client,
AUDIO_CHANNEL_STATUS_6,
~TDM_SLAVE_MODE,
I2S_SLAVE_MODE);
/* Word length */
switch (params->sample_width) {


@@ -132,6 +132,12 @@
#define I2S_SLAVE_MODE 0x08
#define AUDIO_LAYOUT 0x01
#define HPD_DET_TIMER_BIT0_7 0xea
#define HPD_DET_TIMER_BIT8_15 0xeb
#define HPD_DET_TIMER_BIT16_23 0xec
/* HPD debounce time 2ms for 27M clock */
#define HPD_TIME 54000
#define AUDIO_CONTROL_REGISTER 0xe6
#define TDM_TIMING_MODE 0x08
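The comment above gives the arithmetic behind HPD_TIME: a 2 ms debounce at the 27 MHz timer clock is 0.002 s × 27,000,000 Hz = 54,000 ticks, which the driver then spreads across the three byte-wide HPD_DET_TIMER registers. A standalone sketch of that split (names here are illustrative, not the driver's own helpers):

```c
#include <stdint.h>

/* HPD debounce: 2 ms at a 27 MHz tick clock -> 2 * 27000 = 54000 ticks. */
#define HPD_TIME_TICKS (2 * 27000000 / 1000)

/* Split a 24-bit tick count into three byte-wide register values,
 * least significant byte first, mirroring the driver's three writes. */
static inline void hpd_timer_bytes(uint32_t ticks, uint8_t out[3])
{
	out[0] = ticks & 0xFF;          /* BIT0_7   */
	out[1] = (ticks >> 8) & 0xFF;   /* BIT8_15  */
	out[2] = (ticks >> 16) & 0xFF;  /* BIT16_23 */
}
```

54000 is 0x00D2F0, so the three register writes carry 0xF0, 0xD2 and 0x00.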


@@ -2605,7 +2605,8 @@ static int cdns_mhdp_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
cancel_work_sync(&mhdp->modeset_retry_work);
flush_scheduled_work();
flush_work(&mhdp->hpd_work);
/* Ignoring mhdp->hdcp.check_work and mhdp->hdcp.prop_work here. */
clk_disable_unprepare(mhdp->clk);


@@ -11,6 +11,7 @@
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
@@ -151,6 +152,8 @@ struct chipone {
struct regulator *vdd1;
struct regulator *vdd2;
struct regulator *vdd3;
struct clk *refclk;
unsigned long refclk_rate;
bool interface_i2c;
};
@@ -259,7 +262,7 @@ static void chipone_configure_pll(struct chipone *icn,
/*
* DSI byte clock frequency (input into PLL) is calculated as:
* DSI_CLK = mode clock * bpp / dsi_data_lanes / 8
* DSI_CLK = HS clock / 4
*
* DPI pixel clock frequency (output from PLL) is mode clock.
*
@@ -273,8 +276,10 @@ static void chipone_configure_pll(struct chipone *icn,
* It seems the PLL input clock after applying P pre-divider have
* to be lower than 20 MHz.
*/
fin = mode_clock * mipi_dsi_pixel_format_to_bpp(icn->dsi->format) /
icn->dsi->lanes / 8; /* in Hz */
if (icn->refclk)
fin = icn->refclk_rate;
else
fin = icn->dsi->hs_rate / 4; /* in Hz */
/* Minimum value of P predivider for PLL input in 5..20 MHz */
p_min = clamp(DIV_ROUND_UP(fin, 20000000), 1U, 31U);
@@ -319,16 +324,18 @@ static void chipone_configure_pll(struct chipone *icn,
best_p_pot = !(best_p & 1);
dev_dbg(icn->dev,
"PLL: P[3:0]=%d P[4]=2*%d M=%d S[7:5]=2^%d delta=%d => DSI f_in=%d Hz ; DPI f_out=%d Hz\n",
"PLL: P[3:0]=%d P[4]=2*%d M=%d S[7:5]=2^%d delta=%d => DSI f_in(%s)=%d Hz ; DPI f_out=%d Hz\n",
best_p >> best_p_pot, best_p_pot, best_m, best_s + 1,
min_delta, fin, (fin * best_m) / (best_p << (best_s + 1)));
min_delta, icn->refclk ? "EXT" : "DSI", fin,
(fin * best_m) / (best_p << (best_s + 1)));
ref_div = PLL_REF_DIV_P(best_p >> best_p_pot) | PLL_REF_DIV_S(best_s);
if (best_p_pot) /* Prefer /2 pre-divider */
ref_div |= PLL_REF_DIV_Pe;
/* Clock source selection fixed to MIPI DSI clock lane */
chipone_writeb(icn, PLL_CTRL(6), PLL_CTRL_6_MIPI_CLK);
/* Clock source selection either external clock or MIPI DSI clock lane */
chipone_writeb(icn, PLL_CTRL(6),
icn->refclk ? PLL_CTRL_6_EXTERNAL : PLL_CTRL_6_MIPI_CLK);
chipone_writeb(icn, PLL_REF_DIV, ref_div);
chipone_writeb(icn, PLL_INT(0), best_m);
}
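The PLL constraints in this hunk can be checked in isolation: the input (either the external refclk or the DSI HS clock / 4) is pre-divided by P so it lands in the 5–20 MHz window, then multiplied by M and post-divided by 2^(S+1) to produce the DPI pixel clock. A minimal sketch with illustrative values, not the driver's full search loop:

```c
#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define CLAMP(v, lo, hi)   ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

/* Smallest P pre-divider that brings fin (Hz) under the 20 MHz PLL
 * input limit; P is a 5-bit field, so clamp to 1..31. */
static unsigned int pll_p_min(unsigned int fin)
{
	return CLAMP(DIV_ROUND_UP(fin, 20000000u), 1u, 31u);
}

/* DPI output clock for a P/M/S setting: fout = fin * M / (P * 2^(S+1)),
 * matching the f_out term in the debug print above. */
static unsigned long pll_fout(unsigned long fin, unsigned int p,
			      unsigned int m, unsigned int s)
{
	return (fin * m) / (p << (s + 1));
}
```

With the 500 MHz hs_rate set later in this file, fin = 500 MHz / 4 = 125 MHz, so the minimum pre-divider is ceil(125/20) = 7.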
@@ -464,6 +471,11 @@ static void chipone_atomic_pre_enable(struct drm_bridge *bridge,
"failed to enable VDD3 regulator: %d\n", ret);
}
ret = clk_prepare_enable(icn->refclk);
if (ret)
DRM_DEV_ERROR(icn->dev,
"failed to enable REFCLK clock: %d\n", ret);
gpiod_set_value(icn->enable_gpio, 1);
usleep_range(10000, 11000);
@@ -474,6 +486,8 @@ static void chipone_atomic_post_disable(struct drm_bridge *bridge,
{
struct chipone *icn = bridge_to_chipone(bridge);
clk_disable_unprepare(icn->refclk);
if (icn->vdd1)
regulator_disable(icn->vdd1);
@@ -515,6 +529,8 @@ static int chipone_dsi_attach(struct chipone *icn)
dsi->format = MIPI_DSI_FMT_RGB888;
dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET;
dsi->hs_rate = 500000000;
dsi->lp_rate = 16000000;
ret = mipi_dsi_attach(dsi);
if (ret < 0)
@@ -617,6 +633,20 @@ static int chipone_parse_dt(struct chipone *icn)
struct device *dev = icn->dev;
int ret;
icn->refclk = devm_clk_get_optional(dev, "refclk");
if (IS_ERR(icn->refclk)) {
ret = PTR_ERR(icn->refclk);
DRM_DEV_ERROR(dev, "failed to get REFCLK clock: %d\n", ret);
return ret;
} else if (icn->refclk) {
icn->refclk_rate = clk_get_rate(icn->refclk);
if (icn->refclk_rate < 10000000 || icn->refclk_rate > 154000000) {
DRM_DEV_ERROR(dev, "REFCLK out of range: %ld Hz\n",
icn->refclk_rate);
return -EINVAL;
}
}
icn->vdd1 = devm_regulator_get_optional(dev, "vdd1");
if (IS_ERR(icn->vdd1)) {
ret = PTR_ERR(icn->vdd1);


@@ -68,6 +68,7 @@ enum {
BYTE_SWAP_GBR = 3,
BYTE_SWAP_BRG = 4,
BYTE_SWAP_BGR = 5,
BYTE_SWAP_MAX = 6,
};
/* Page 0, Register 0x19 */
@@ -355,6 +356,8 @@ static void ch7033_bridge_mode_set(struct drm_bridge *bridge,
int hsynclen = mode->hsync_end - mode->hsync_start;
int vbporch = mode->vsync_start - mode->vdisplay;
int vsynclen = mode->vsync_end - mode->vsync_start;
u8 byte_swap;
int ret;
/*
* Page 4
@@ -398,8 +401,16 @@ static void ch7033_bridge_mode_set(struct drm_bridge *bridge,
regmap_write(priv->regmap, 0x15, vbporch);
regmap_write(priv->regmap, 0x16, vsynclen);
/* Input color swap. */
regmap_update_bits(priv->regmap, 0x18, SWAP, BYTE_SWAP_BGR);
/* Input color swap. Byte order is optional and will default to
* BYTE_SWAP_BGR to preserve backwards compatibility with the
* existing driver.
*/
ret = of_property_read_u8(priv->bridge.of_node, "chrontel,byteswap",
&byte_swap);
if (!ret && byte_swap < BYTE_SWAP_MAX)
regmap_update_bits(priv->regmap, 0x18, SWAP, byte_swap);
else
regmap_update_bits(priv->regmap, 0x18, SWAP, BYTE_SWAP_BGR);
/* Input clock and sync polarity. */
regmap_update_bits(priv->regmap, 0x19, 0x1, mode->clock >> 16);


@@ -2951,9 +2951,6 @@ static void it6505_bridge_atomic_enable(struct drm_bridge *bridge,
if (ret)
dev_err(dev, "Failed to setup AVI infoframe: %d", ret);
it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
DP_SET_POWER_D0);
it6505_update_video_parameter(it6505, mode);
ret = it6505_send_video_infoframe(it6505, &frame);
@@ -2963,6 +2960,9 @@ static void it6505_bridge_atomic_enable(struct drm_bridge *bridge,
it6505_int_mask_enable(it6505);
it6505_video_reset(it6505);
it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
DP_SET_POWER_D0);
}
static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
@@ -2974,9 +2974,9 @@ static void it6505_bridge_atomic_disable(struct drm_bridge *bridge,
DRM_DEV_DEBUG_DRIVER(dev, "start");
if (it6505->powered) {
it6505_video_disable(it6505);
it6505_drm_dp_link_set_power(&it6505->aux, &it6505->link,
DP_SET_POWER_D3);
it6505_video_disable(it6505);
}
}


@@ -296,7 +296,9 @@ static void ge_b850v3_lvds_remove(void)
* This check is to avoid both the drivers
* removing the bridge in their remove() function
*/
if (!ge_b850v3_lvds_ptr)
if (!ge_b850v3_lvds_ptr ||
!ge_b850v3_lvds_ptr->stdp2690_i2c ||
!ge_b850v3_lvds_ptr->stdp4028_i2c)
goto out;
drm_bridge_remove(&ge_b850v3_lvds_ptr->bridge);


@@ -375,6 +375,11 @@ static int __maybe_unused ps8640_resume(struct device *dev)
gpiod_set_value(ps_bridge->gpio_reset, 1);
usleep_range(2000, 2500);
gpiod_set_value(ps_bridge->gpio_reset, 0);
/* Double reset for T4 and T5 */
msleep(50);
gpiod_set_value(ps_bridge->gpio_reset, 1);
msleep(50);
gpiod_set_value(ps_bridge->gpio_reset, 0);
/*
* Mystery 200 ms delay for the "MCU to be ready". It's unclear if


@@ -3096,6 +3096,7 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
{
struct dw_hdmi *hdmi = dev_id;
u8 intr_stat, phy_int_pol, phy_pol_mask, phy_stat;
enum drm_connector_status status = connector_status_unknown;
intr_stat = hdmi_readb(hdmi, HDMI_IH_PHY_STAT0);
phy_int_pol = hdmi_readb(hdmi, HDMI_PHY_POL0);
@@ -3134,13 +3135,15 @@ static irqreturn_t dw_hdmi_irq(int irq, void *dev_id)
cec_notifier_phys_addr_invalidate(hdmi->cec_notifier);
mutex_unlock(&hdmi->cec_notifier_mutex);
}
if (phy_stat & HDMI_PHY_HPD)
status = connector_status_connected;
if (!(phy_stat & (HDMI_PHY_HPD | HDMI_PHY_RX_SENSE)))
status = connector_status_disconnected;
}
if (intr_stat & HDMI_IH_PHY_STAT0_HPD) {
enum drm_connector_status status = phy_int_pol & HDMI_PHY_HPD
? connector_status_connected
: connector_status_disconnected;
if (status != connector_status_unknown) {
dev_dbg(hdmi->dev, "EVENT=%s\n",
status == connector_status_connected ?
"plugin" : "plugout");


@@ -1913,22 +1913,23 @@ static int tc_mipi_dsi_host_attach(struct tc_data *tc)
static int tc_probe_dpi_bridge_endpoint(struct tc_data *tc)
{
struct device *dev = tc->dev;
struct drm_bridge *bridge;
struct drm_panel *panel;
int ret;
/* port@1 is the DPI input/output port */
ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, NULL);
ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, &bridge);
if (ret && ret != -ENODEV)
return ret;
if (panel) {
struct drm_bridge *panel_bridge;
bridge = devm_drm_panel_bridge_add(dev, panel);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
}
panel_bridge = devm_drm_panel_bridge_add(dev, panel);
if (IS_ERR(panel_bridge))
return PTR_ERR(panel_bridge);
tc->panel_bridge = panel_bridge;
if (bridge) {
tc->panel_bridge = bridge;
tc->bridge.type = DRM_MODE_CONNECTOR_DPI;
tc->bridge.funcs = &tc_dpi_bridge_funcs;


@@ -29,6 +29,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_bridge_connector.h>
#include <drm/drm_edid.h>
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
@@ -68,6 +69,7 @@
#define BPP_18_RGB BIT(0)
#define SN_HPD_DISABLE_REG 0x5C
#define HPD_DISABLE BIT(0)
#define HPD_DEBOUNCED_STATE BIT(4)
#define SN_GPIO_IO_REG 0x5E
#define SN_GPIO_INPUT_SHIFT 4
#define SN_GPIO_OUTPUT_SHIFT 0
@@ -92,6 +94,8 @@
#define SN_DATARATE_CONFIG_REG 0x94
#define DP_DATARATE_MASK GENMASK(7, 5)
#define DP_DATARATE(x) ((x) << 5)
#define SN_TRAINING_SETTING_REG 0x95
#define SCRAMBLE_DISABLE BIT(4)
#define SN_ML_TX_MODE_REG 0x96
#define ML_TX_MAIN_LINK_OFF 0
#define ML_TX_NORMAL_MODE BIT(0)
@@ -747,6 +751,29 @@ ti_sn_bridge_mode_valid(struct drm_bridge *bridge,
if (mode->clock > 594000)
return MODE_CLOCK_HIGH;
/*
* The front and back porch registers are 8 bits, and pulse width
* registers are 15 bits, so reject any modes with larger periods.
*/
if ((mode->hsync_start - mode->hdisplay) > 0xff)
return MODE_HBLANK_WIDE;
if ((mode->vsync_start - mode->vdisplay) > 0xff)
return MODE_VBLANK_WIDE;
if ((mode->hsync_end - mode->hsync_start) > 0x7fff)
return MODE_HSYNC_WIDE;
if ((mode->vsync_end - mode->vsync_start) > 0x7fff)
return MODE_VSYNC_WIDE;
if ((mode->htotal - mode->hsync_end) > 0xff)
return MODE_HBLANK_WIDE;
if ((mode->vtotal - mode->vsync_end) > 0xff)
return MODE_VBLANK_WIDE;
return MODE_OK;
}
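The new mode_valid checks above encode the register widths: front and back porches must fit 8-bit fields, sync pulse widths 15-bit fields. Those comparisons can be exercised in isolation; this sketch mirrors them with a minimal mode struct (not the DRM display-mode type):

```c
#include <stdbool.h>

struct mode {
	int hdisplay, hsync_start, hsync_end, htotal;
	int vdisplay, vsync_start, vsync_end, vtotal;
};

/* Porches must fit 8-bit registers, sync pulses 15-bit ones. */
static bool porches_fit(const struct mode *m)
{
	if ((m->hsync_start - m->hdisplay) > 0xff)    /* h front porch */
		return false;
	if ((m->vsync_start - m->vdisplay) > 0xff)    /* v front porch */
		return false;
	if ((m->hsync_end - m->hsync_start) > 0x7fff) /* h sync width */
		return false;
	if ((m->vsync_end - m->vsync_start) > 0x7fff) /* v sync width */
		return false;
	if ((m->htotal - m->hsync_end) > 0xff)        /* h back porch */
		return false;
	if ((m->vtotal - m->vsync_end) > 0xff)        /* v back porch */
		return false;
	return true;
}
```

The standard CEA 1080p timing (porches of 88/148 and 4/36 pixels) passes comfortably; modes with unusually wide blanking are the ones these checks reject.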
@@ -1047,12 +1074,23 @@ static void ti_sn_bridge_atomic_enable(struct drm_bridge *bridge,
/*
* The SN65DSI86 only supports ASSR Display Authentication method and
* this method is enabled by default. An eDP panel must support this
* this method is enabled for eDP panels. An eDP panel must support this
* authentication method. We need to enable this method in the eDP panel
* at DisplayPort address 0x0010A prior to link training.
*
* As only ASSR is supported by SN65DSI86, for full DisplayPort displays
* we need to disable the scrambler.
*/
drm_dp_dpcd_writeb(&pdata->aux, DP_EDP_CONFIGURATION_SET,
DP_ALTERNATE_SCRAMBLER_RESET_ENABLE);
if (pdata->bridge.type == DRM_MODE_CONNECTOR_eDP) {
drm_dp_dpcd_writeb(&pdata->aux, DP_EDP_CONFIGURATION_SET,
DP_ALTERNATE_SCRAMBLER_RESET_ENABLE);
regmap_update_bits(pdata->regmap, SN_TRAINING_SETTING_REG,
SCRAMBLE_DISABLE, 0);
} else {
regmap_update_bits(pdata->regmap, SN_TRAINING_SETTING_REG,
SCRAMBLE_DISABLE, SCRAMBLE_DISABLE);
}
bpp = ti_sn_bridge_get_bpp(connector);
/* Set the DP output format (18 bpp or 24 bpp) */
@@ -1122,10 +1160,33 @@ static void ti_sn_bridge_atomic_post_disable(struct drm_bridge *bridge,
pm_runtime_put_sync(pdata->dev);
}
static enum drm_connector_status ti_sn_bridge_detect(struct drm_bridge *bridge)
{
struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
int val = 0;
pm_runtime_get_sync(pdata->dev);
regmap_read(pdata->regmap, SN_HPD_DISABLE_REG, &val);
pm_runtime_put_autosuspend(pdata->dev);
return val & HPD_DEBOUNCED_STATE ? connector_status_connected
: connector_status_disconnected;
}
static struct edid *ti_sn_bridge_get_edid(struct drm_bridge *bridge,
struct drm_connector *connector)
{
struct ti_sn65dsi86 *pdata = bridge_to_ti_sn65dsi86(bridge);
return drm_get_edid(connector, &pdata->aux.ddc);
}
static const struct drm_bridge_funcs ti_sn_bridge_funcs = {
.attach = ti_sn_bridge_attach,
.detach = ti_sn_bridge_detach,
.mode_valid = ti_sn_bridge_mode_valid,
.get_edid = ti_sn_bridge_get_edid,
.detect = ti_sn_bridge_detect,
.atomic_pre_enable = ti_sn_bridge_atomic_pre_enable,
.atomic_enable = ti_sn_bridge_atomic_enable,
.atomic_disable = ti_sn_bridge_atomic_disable,
@@ -1218,6 +1279,11 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
pdata->bridge.funcs = &ti_sn_bridge_funcs;
pdata->bridge.of_node = np;
pdata->bridge.type = pdata->next_bridge->type == DRM_MODE_CONNECTOR_DisplayPort
? DRM_MODE_CONNECTOR_DisplayPort : DRM_MODE_CONNECTOR_eDP;
if (pdata->bridge.type == DRM_MODE_CONNECTOR_DisplayPort)
pdata->bridge.ops = DRM_BRIDGE_OP_EDID | DRM_BRIDGE_OP_DETECT;
drm_bridge_add(&pdata->bridge);


@@ -390,6 +390,38 @@ void drm_dp_link_train_channel_eq_delay(const struct drm_dp_aux *aux,
}
EXPORT_SYMBOL(drm_dp_link_train_channel_eq_delay);
/**
* drm_dp_phy_name() - Get the name of the given DP PHY
* @dp_phy: The DP PHY identifier
*
* Given the @dp_phy, get a user friendly name of the DP PHY, either "DPRX" or
* "LTTPR <N>", or "<INVALID DP PHY>" on errors. The returned string is always
* non-NULL and valid.
*
* Returns: Name of the DP PHY.
*/
const char *drm_dp_phy_name(enum drm_dp_phy dp_phy)
{
static const char * const phy_names[] = {
[DP_PHY_DPRX] = "DPRX",
[DP_PHY_LTTPR1] = "LTTPR 1",
[DP_PHY_LTTPR2] = "LTTPR 2",
[DP_PHY_LTTPR3] = "LTTPR 3",
[DP_PHY_LTTPR4] = "LTTPR 4",
[DP_PHY_LTTPR5] = "LTTPR 5",
[DP_PHY_LTTPR6] = "LTTPR 6",
[DP_PHY_LTTPR7] = "LTTPR 7",
[DP_PHY_LTTPR8] = "LTTPR 8",
};
if (dp_phy < 0 || dp_phy >= ARRAY_SIZE(phy_names) ||
WARN_ON(!phy_names[dp_phy]))
return "<INVALID DP PHY>";
return phy_names[dp_phy];
}
EXPORT_SYMBOL(drm_dp_phy_name);
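The helper above is a bounds-checked table lookup that can never return NULL. The same pattern, reduced to a standalone sketch (enum and names shortened for illustration):

```c
#include <stddef.h>

enum dp_phy { PHY_DPRX, PHY_LTTPR1, PHY_LTTPR2, PHY_COUNT };

/* Return a printable PHY name; never NULL, even for out-of-range input. */
static const char *phy_name(int phy)
{
	static const char *const names[PHY_COUNT] = {
		[PHY_DPRX]   = "DPRX",
		[PHY_LTTPR1] = "LTTPR 1",
		[PHY_LTTPR2] = "LTTPR 2",
	};

	if (phy < 0 || phy >= PHY_COUNT || !names[phy])
		return "<INVALID DP PHY>";
	return names[phy];
}
```

The `!names[phy]` guard matters when the enum grows: a new value without a matching table entry yields the fallback string instead of a NULL dereference in a debug print.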
void drm_dp_lttpr_link_train_clock_recovery_delay(void)
{
usleep_range(100, 200);

File diff suppressed because it is too large


@@ -702,8 +702,12 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
if (funcs->atomic_check)
ret = funcs->atomic_check(connector, state);
if (ret)
if (ret) {
drm_dbg_atomic(dev,
"[CONNECTOR:%d:%s] driver check failed\n",
connector->base.id, connector->name);
return ret;
}
connectors_mask |= BIT(i);
}
@@ -745,8 +749,12 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
if (funcs->atomic_check)
ret = funcs->atomic_check(connector, state);
if (ret)
if (ret) {
drm_dbg_atomic(dev,
"[CONNECTOR:%d:%s] driver check failed\n",
connector->base.id, connector->name);
return ret;
}
}
/*
@@ -777,6 +785,45 @@ drm_atomic_helper_check_modeset(struct drm_device *dev,
}
EXPORT_SYMBOL(drm_atomic_helper_check_modeset);
/**
* drm_atomic_helper_check_wb_encoder_state() - Check writeback encoder state
* @encoder: encoder state to check
* @conn_state: connector state to check
*
* Checks if the writeback connector state is valid, and returns an error if it
* isn't.
*
* RETURNS:
* Zero for success or -errno
*/
int
drm_atomic_helper_check_wb_encoder_state(struct drm_encoder *encoder,
struct drm_connector_state *conn_state)
{
struct drm_writeback_job *wb_job = conn_state->writeback_job;
struct drm_property_blob *pixel_format_blob;
struct drm_framebuffer *fb;
size_t i, nformats;
u32 *formats;
if (!wb_job || !wb_job->fb)
return 0;
pixel_format_blob = wb_job->connector->pixel_formats_blob_ptr;
nformats = pixel_format_blob->length / sizeof(u32);
formats = pixel_format_blob->data;
fb = wb_job->fb;
for (i = 0; i < nformats; i++)
if (fb->format->format == formats[i])
return 0;
drm_dbg_kms(encoder->dev, "Invalid pixel format %p4cc\n", &fb->format->format);
return -EINVAL;
}
EXPORT_SYMBOL(drm_atomic_helper_check_wb_encoder_state);
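The core of the new helper is a linear membership test: the framebuffer's fourcc code must appear in the connector's advertised pixel-format blob. In isolation, with the little-endian fourcc construction spelled out (macro name and values illustrative):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Little-endian fourcc, the same packing drm_fourcc.h uses for DRM_FORMAT_*. */
#define FOURCC(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* True if fmt appears in the connector's advertised format list. */
static bool wb_format_supported(uint32_t fmt,
				const uint32_t *formats, size_t nformats)
{
	for (size_t i = 0; i < nformats; i++)
		if (formats[i] == fmt)
			return true;
	return false;
}
```

In the helper, `formats` and `nformats` come from the connector's `pixel_formats_blob_ptr`, so the check costs one pass over a short blob per writeback job.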
/**
* drm_atomic_helper_check_plane_state() - Check plane state for validity
* @plane_state: plane state to check
@@ -1788,7 +1835,7 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
struct drm_plane_state *old_plane_state = NULL;
struct drm_plane_state *new_plane_state = NULL;
const struct drm_plane_helper_funcs *funcs;
int i, n_planes = 0;
int i, ret, n_planes = 0;
for_each_new_crtc_in_state(state, crtc, crtc_state, i) {
if (drm_atomic_crtc_needs_modeset(crtc_state))
@@ -1799,19 +1846,34 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
n_planes++;
/* FIXME: we support only single plane updates for now */
if (n_planes != 1)
if (n_planes != 1) {
drm_dbg_atomic(dev,
"only single plane async updates are supported\n");
return -EINVAL;
}
if (!new_plane_state->crtc ||
old_plane_state->crtc != new_plane_state->crtc)
old_plane_state->crtc != new_plane_state->crtc) {
drm_dbg_atomic(dev,
"[PLANE:%d:%s] async update cannot change CRTC\n",
plane->base.id, plane->name);
return -EINVAL;
}
funcs = plane->helper_private;
if (!funcs->atomic_async_update)
if (!funcs->atomic_async_update) {
drm_dbg_atomic(dev,
"[PLANE:%d:%s] driver does not support async updates\n",
plane->base.id, plane->name);
return -EINVAL;
}
if (new_plane_state->fence)
if (new_plane_state->fence) {
drm_dbg_atomic(dev,
"[PLANE:%d:%s] async update with a fence is not supported\n",
plane->base.id, plane->name);
return -EINVAL;
}
/*
* Don't do an async update if there is an outstanding commit modifying
@@ -1826,7 +1888,12 @@ int drm_atomic_helper_async_check(struct drm_device *dev,
return -EBUSY;
}
return funcs->atomic_async_check(plane, state);
ret = funcs->atomic_async_check(plane, state);
if (ret != 0)
drm_dbg_atomic(dev,
"[PLANE:%d:%s] driver async check failed\n",
plane->base.id, plane->name);
return ret;
}
EXPORT_SYMBOL(drm_atomic_helper_async_check);


@@ -151,6 +151,9 @@ int drm_mode_getresources(struct drm_device *dev, void *data,
count = 0;
connector_id = u64_to_user_ptr(card_res->connector_id_ptr);
drm_for_each_connector_iter(connector, &conn_iter) {
if (connector->registration_state != DRM_CONNECTOR_REGISTERED)
continue;
/* only expose writeback connectors if userspace understands them */
if (!file_priv->writeback_connectors &&
(connector->connector_type == DRM_MODE_CONNECTOR_WRITEBACK))


@@ -115,7 +115,7 @@ i2c_algo_dp_aux_stop(struct i2c_adapter *adapter, bool reading)
/*
* Write a single byte to the current I2C address, the
* the I2C link must be running or this returns -EIO
* I2C link must be running or this returns -EIO
*/
static int
i2c_algo_dp_aux_put_byte(struct i2c_adapter *adapter, u8 byte)


@@ -310,7 +310,7 @@ static void oaktrail_crtc_dpms(struct drm_crtc *crtc, int mode)
temp & ~PIPEACONF_ENABLE, i);
REG_READ_WITH_AUX(map->conf, i);
}
/* Wait for for the pipe disable to take effect. */
/* Wait for the pipe disable to take effect. */
gma_wait_for_vblank(dev);
temp = REG_READ_WITH_AUX(map->dpll, i);


@@ -400,26 +400,38 @@ static const struct _sdvo_cmd_name {
#define IS_SDVOB(reg) (reg == SDVOB)
#define SDVO_NAME(svdo) (IS_SDVOB((svdo)->sdvo_reg) ? "SDVOB" : "SDVOC")
static void psb_intel_sdvo_debug_write(struct psb_intel_sdvo *psb_intel_sdvo, u8 cmd,
const void *args, int args_len)
static void psb_intel_sdvo_debug_write(struct psb_intel_sdvo *psb_intel_sdvo,
u8 cmd, const void *args, int args_len)
{
int i;
struct drm_device *dev = psb_intel_sdvo->base.base.dev;
int i, pos = 0;
char buffer[73];
#define BUF_PRINT(args...) \
pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args)
for (i = 0; i < args_len; i++) {
BUF_PRINT("%02X ", ((u8 *)args)[i]);
}
for (; i < 8; i++) {
BUF_PRINT(" ");
}
DRM_DEBUG_KMS("%s: W: %02X ",
SDVO_NAME(psb_intel_sdvo), cmd);
for (i = 0; i < args_len; i++)
DRM_DEBUG_KMS("%02X ", ((u8 *)args)[i]);
for (; i < 8; i++)
DRM_DEBUG_KMS(" ");
for (i = 0; i < ARRAY_SIZE(sdvo_cmd_names); i++) {
if (cmd == sdvo_cmd_names[i].cmd) {
DRM_DEBUG_KMS("(%s)", sdvo_cmd_names[i].name);
BUF_PRINT("(%s)", sdvo_cmd_names[i].name);
break;
}
}
if (i == ARRAY_SIZE(sdvo_cmd_names))
DRM_DEBUG_KMS("(%02X)", cmd);
DRM_DEBUG_KMS("\n");
BUF_PRINT("(%02X)", cmd);
drm_WARN_ON(dev, pos >= sizeof(buffer) - 1);
#undef BUF_PRINT
DRM_DEBUG_KMS("%s: W: %02X %s\n", SDVO_NAME(psb_intel_sdvo), cmd, buffer);
}
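The BUF_PRINT idiom above accumulates formatted fragments into one fixed buffer and clamps the size passed to snprintf to zero once the buffer is full, so overlong output truncates instead of overflowing. A freestanding version of the same trick (function instead of macro, buffer size illustrative):

```c
#include <stdio.h>
#include <stdarg.h>

/* Append to buf (capacity cap) at offset *pos. Once full, the size passed
 * to vsnprintf is clamped to 0, so further output is silently dropped while
 * *pos keeps tracking the would-be length -- the same trick as BUF_PRINT. */
static void buf_print(char *buf, int cap, int *pos, const char *fmt, ...)
{
	va_list ap;
	int room = cap - *pos;

	va_start(ap, fmt);
	*pos += vsnprintf(buf + (*pos < cap ? *pos : cap),
			  room > 0 ? room : 0, fmt, ap);
	va_end(ap);
}
```

Building the line in one buffer and emitting it with a single DRM_DEBUG_KMS call is the point of the patch: the old per-byte DRM_DEBUG_KMS calls each produced their own log line, garbling the output.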
static const char *cmd_status_names[] = {
@@ -490,13 +502,13 @@ static bool psb_intel_sdvo_write_cmd(struct psb_intel_sdvo *psb_intel_sdvo, u8 c
}
static bool psb_intel_sdvo_read_response(struct psb_intel_sdvo *psb_intel_sdvo,
void *response, int response_len)
void *response, int response_len)
{
struct drm_device *dev = psb_intel_sdvo->base.base.dev;
char buffer[73];
int i, pos = 0;
u8 retry = 5;
u8 status;
int i;
DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(psb_intel_sdvo));
/*
* The documentation states that all commands will be
@@ -520,10 +532,13 @@ static bool psb_intel_sdvo_read_response(struct psb_intel_sdvo *psb_intel_sdvo,
goto log_fail;
}
#define BUF_PRINT(args...) \
pos += snprintf(buffer + pos, max_t(int, sizeof(buffer) - pos, 0), args)
if (status <= SDVO_CMD_STATUS_SCALING_NOT_SUPP)
DRM_DEBUG_KMS("(%s)", cmd_status_names[status]);
BUF_PRINT("(%s)", cmd_status_names[status]);
else
DRM_DEBUG_KMS("(??? %d)", status);
BUF_PRINT("(??? %d)", status);
if (status != SDVO_CMD_STATUS_SUCCESS)
goto log_fail;
@@ -534,13 +549,18 @@ static bool psb_intel_sdvo_read_response(struct psb_intel_sdvo *psb_intel_sdvo,
SDVO_I2C_RETURN_0 + i,
&((u8 *)response)[i]))
goto log_fail;
DRM_DEBUG_KMS(" %02X", ((u8 *)response)[i]);
BUF_PRINT(" %02X", ((u8 *)response)[i]);
}
DRM_DEBUG_KMS("\n");
drm_WARN_ON(dev, pos >= sizeof(buffer) - 1);
#undef BUF_PRINT
DRM_DEBUG_KMS("%s: R: %s\n", SDVO_NAME(psb_intel_sdvo), buffer);
return true;
log_fail:
DRM_DEBUG_KMS("... failed\n");
DRM_DEBUG_KMS("%s: R: ... failed %s\n",
SDVO_NAME(psb_intel_sdvo), buffer);
return false;
}


@@ -7531,6 +7531,7 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
intel_atomic_commit_fence_wait(state);
drm_atomic_helper_wait_for_dependencies(&state->base);
drm_dp_mst_atomic_wait_for_dependencies(&state->base);
if (state->modeset)
wakeref = intel_display_power_get(dev_priv, POWER_DOMAIN_MODESET);
@@ -8599,6 +8600,10 @@ out:
return ret;
}
static const struct drm_mode_config_helper_funcs intel_mode_config_funcs = {
.atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
};
static void intel_mode_config_init(struct drm_i915_private *i915)
{
struct drm_mode_config *mode_config = &i915->drm.mode_config;
@@ -8613,6 +8618,7 @@ static void intel_mode_config_init(struct drm_i915_private *i915)
mode_config->prefer_shadow = 1;
mode_config->funcs = &intel_mode_funcs;
mode_config->helper_private = &intel_mode_config_funcs;
mode_config->async_page_flip = HAS_ASYNC_FLIPS(i915);


@@ -4992,12 +4992,21 @@ static int intel_dp_connector_atomic_check(struct drm_connector *conn,
{
struct drm_i915_private *dev_priv = to_i915(conn->dev);
struct intel_atomic_state *state = to_intel_atomic_state(_state);
struct drm_connector_state *conn_state = drm_atomic_get_new_connector_state(_state, conn);
struct intel_connector *intel_conn = to_intel_connector(conn);
struct intel_dp *intel_dp = enc_to_intel_dp(intel_conn->encoder);
int ret;
ret = intel_digital_connector_atomic_check(conn, &state->base);
if (ret)
return ret;
if (intel_dp_mst_source_support(intel_dp)) {
ret = drm_dp_mst_root_conn_atomic_check(conn_state, &intel_dp->mst_mgr);
if (ret)
return ret;
}
/*
* We don't enable port sync on BDW due to missing w/as and
* due to not having adjusted the modeset sequence appropriately.


@@ -52,6 +52,7 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
struct drm_atomic_state *state = crtc_state->uapi.state;
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
struct intel_dp *intel_dp = &intel_mst->primary->dp;
struct drm_dp_mst_topology_state *mst_state;
struct intel_connector *connector =
to_intel_connector(conn_state->connector);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
@@ -60,22 +61,28 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
bool constant_n = drm_dp_has_quirk(&intel_dp->desc, DP_DPCD_QUIRK_CONSTANT_N);
int bpp, slots = -EINVAL;
mst_state = drm_atomic_get_mst_topology_state(state, &intel_dp->mst_mgr);
if (IS_ERR(mst_state))
return PTR_ERR(mst_state);
crtc_state->lane_count = limits->max_lane_count;
crtc_state->port_clock = limits->max_rate;
// TODO: Handle pbn_div changes by adding a new MST helper
if (!mst_state->pbn_div) {
mst_state->pbn_div = drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
limits->max_rate,
limits->max_lane_count);
}
for (bpp = limits->max_bpp; bpp >= limits->min_bpp; bpp -= 2 * 3) {
crtc_state->pipe_bpp = bpp;
crtc_state->pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock,
crtc_state->pipe_bpp,
false);
slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
connector->port,
crtc_state->pbn,
drm_dp_get_vc_payload_bw(&intel_dp->mst_mgr,
crtc_state->port_clock,
crtc_state->lane_count));
slots = drm_dp_atomic_find_time_slots(state, &intel_dp->mst_mgr,
connector->port, crtc_state->pbn);
if (slots == -EDEADLK)
return slots;
if (slots >= 0)
@@ -308,14 +315,8 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
struct drm_atomic_state *_state)
{
struct intel_atomic_state *state = to_intel_atomic_state(_state);
struct drm_connector_state *new_conn_state =
drm_atomic_get_new_connector_state(&state->base, connector);
struct drm_connector_state *old_conn_state =
drm_atomic_get_old_connector_state(&state->base, connector);
struct intel_connector *intel_connector =
to_intel_connector(connector);
struct drm_crtc *new_crtc = new_conn_state->crtc;
struct drm_dp_mst_topology_mgr *mgr;
int ret;
ret = intel_digital_connector_atomic_check(connector, &state->base);
@@ -326,28 +327,9 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
if (ret)
return ret;
if (!old_conn_state->crtc)
return 0;
/* We only want to free VCPI if this state disables the CRTC on this
* connector
*/
if (new_crtc) {
struct intel_crtc *crtc = to_intel_crtc(new_crtc);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(state, crtc);
if (!crtc_state ||
!drm_atomic_crtc_needs_modeset(&crtc_state->uapi) ||
crtc_state->uapi.enable)
return 0;
}
mgr = &enc_to_mst(to_intel_encoder(old_conn_state->best_encoder))->primary->dp.mst_mgr;
ret = drm_dp_atomic_release_vcpi_slots(&state->base, mgr,
intel_connector->port);
return ret;
return drm_dp_atomic_release_time_slots(&state->base,
&intel_connector->mst_port->mst_mgr,
intel_connector->port);
}
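drm_dp_calc_pbn_mode(), used in the link-config loop above, converts a pixel clock and bpp into payload bandwidth number (PBN) units of 54/64 MBps with a 0.6% margin. A hedged standalone version of that arithmetic, simplified from the DRM helper (verify against drm_dp_calc_pbn_mode() in your kernel tree before relying on exact values):

```c
#include <stdint.h>

#define DIV_ROUND_UP_ULL(n, d) (((n) + (d) - 1) / (d))

/* PBN for a non-DSC stream: kHz * bpp gives kbit/s; divide by 8 for bytes,
 * scale by the 1006/1000 margin, and express in 54/64-MBps PBN units. */
static uint32_t calc_pbn(uint32_t clock_khz, uint32_t bpp)
{
	uint64_t num = (uint64_t)clock_khz * bpp * 64 * 1006;

	return (uint32_t)DIV_ROUND_UP_ULL(num, 8ull * 54 * 64 * 1000);
}
```

For a 1080p60 stream (148500 kHz, 24 bpp) this yields 8300 PBN; dividing by the topology state's pbn_div then gives the time-slot count that drm_dp_atomic_find_time_slots() reserves.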
static void clear_act_sent(struct intel_encoder *encoder,
@@ -383,21 +365,17 @@ static void intel_mst_disable_dp(struct intel_atomic_state *state,
struct intel_dp *intel_dp = &dig_port->dp;
struct intel_connector *connector =
to_intel_connector(old_conn_state->connector);
struct drm_dp_mst_topology_state *mst_state =
drm_atomic_get_mst_topology_state(&state->base, &intel_dp->mst_mgr);
struct drm_i915_private *i915 = to_i915(connector->base.dev);
int start_slot = intel_dp_is_uhbr(old_crtc_state) ? 0 : 1;
int ret;
drm_dbg_kms(&i915->drm, "active links %d\n",
intel_dp->active_mst_links);
intel_hdcp_disable(intel_mst->connector);
drm_dp_mst_reset_vcpi_slots(&intel_dp->mst_mgr, connector->port);
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
if (ret) {
drm_dbg_kms(&i915->drm, "failed to update payload %d\n", ret);
}
drm_dp_remove_payload(&intel_dp->mst_mgr, mst_state,
drm_atomic_get_mst_payload_state(mst_state, connector->port));
intel_audio_codec_disable(encoder, old_crtc_state, old_conn_state);
}
@@ -425,8 +403,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
intel_disable_transcoder(old_crtc_state);
drm_dp_update_payload_part2(&intel_dp->mst_mgr);
clear_act_sent(encoder, old_crtc_state);
intel_de_rmw(dev_priv, TRANS_DDI_FUNC_CTL(old_crtc_state->cpu_transcoder),
@@ -434,8 +410,6 @@ static void intel_mst_post_disable_dp(struct intel_atomic_state *state,
wait_for_act_sent(encoder, old_crtc_state);
drm_dp_mst_deallocate_vcpi(&intel_dp->mst_mgr, connector->port);
intel_ddi_disable_transcoder_func(old_crtc_state);
if (DISPLAY_VER(dev_priv) >= 9)
@@ -502,7 +476,8 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_connector *connector =
to_intel_connector(conn_state->connector);
int start_slot = intel_dp_is_uhbr(pipe_config) ? 0 : 1;
struct drm_dp_mst_topology_state *mst_state =
drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
int ret;
bool first_mst_stream;
@@ -528,16 +503,13 @@ static void intel_mst_pre_enable_dp(struct intel_atomic_state *state,
dig_port->base.pre_enable(state, &dig_port->base,
pipe_config, NULL);
ret = drm_dp_mst_allocate_vcpi(&intel_dp->mst_mgr,
connector->port,
pipe_config->pbn,
pipe_config->dp_m_n.tu);
if (!ret)
drm_err(&dev_priv->drm, "failed to allocate vcpi\n");
intel_dp->active_mst_links++;
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr, start_slot);
ret = drm_dp_add_payload_part1(&intel_dp->mst_mgr, mst_state,
drm_atomic_get_mst_payload_state(mst_state, connector->port));
if (ret < 0)
drm_err(&dev_priv->drm, "Failed to create MST payload for %s: %d\n",
connector->base.name, ret);
/*
* Before Gen 12 this is not done as part of
@@ -560,7 +532,10 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(encoder);
struct intel_digital_port *dig_port = intel_mst->primary;
struct intel_dp *intel_dp = &dig_port->dp;
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_dp_mst_topology_state *mst_state =
drm_atomic_get_new_mst_topology_state(&state->base, &intel_dp->mst_mgr);
enum transcoder trans = pipe_config->cpu_transcoder;
drm_WARN_ON(&dev_priv->drm, pipe_config->has_pch_encoder);
@@ -588,7 +563,8 @@ static void intel_mst_enable_dp(struct intel_atomic_state *state,
wait_for_act_sent(encoder, pipe_config);
drm_dp_update_payload_part2(&intel_dp->mst_mgr);
drm_dp_add_payload_part2(&intel_dp->mst_mgr, &state->base,
drm_atomic_get_mst_payload_state(mst_state, connector->port));
if (DISPLAY_VER(dev_priv) >= 12 && pipe_config->fec_enable)
intel_de_rmw(dev_priv, CHICKEN_TRANS(trans), 0,
@@ -972,8 +948,6 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
struct intel_dp *intel_dp = &dig_port->dp;
enum port port = dig_port->base.port;
int ret;
int max_source_rate =
intel_dp->source_rates[intel_dp->num_source_rates - 1];
if (!HAS_DP_MST(i915) || intel_dp_is_edp(intel_dp))
return 0;
@@ -989,10 +963,7 @@ intel_dp_mst_encoder_init(struct intel_digital_port *dig_port, int conn_base_id)
/* create encoders */
intel_dp_create_fake_mst_encoders(dig_port);
ret = drm_dp_mst_topology_mgr_init(&intel_dp->mst_mgr, &i915->drm,
&intel_dp->aux, 16, 3,
dig_port->max_lanes,
max_source_rate,
conn_base_id);
&intel_dp->aux, 16, 3, conn_base_id);
if (ret) {
intel_dp->mst_mgr.cbs = NULL;
return ret;


@@ -30,8 +30,30 @@
static int intel_conn_to_vcpi(struct intel_connector *connector)
{
struct drm_dp_mst_topology_mgr *mgr;
struct drm_dp_mst_atomic_payload *payload;
struct drm_dp_mst_topology_state *mst_state;
int vcpi = 0;
/* For HDMI this is forced to be 0x0. For DP SST also this is 0x0. */
return connector->port ? connector->port->vcpi.vcpi : 0;
if (!connector->port)
return 0;
mgr = connector->port->mgr;
drm_modeset_lock(&mgr->base.lock, NULL);
mst_state = to_drm_dp_mst_topology_state(mgr->base.state);
payload = drm_atomic_get_mst_payload_state(mst_state, connector->port);
if (drm_WARN_ON(mgr->dev, !payload))
goto out;
vcpi = payload->vcpi;
if (drm_WARN_ON(mgr->dev, vcpi < 0)) {
vcpi = 0;
goto out;
}
out:
drm_modeset_unlock(&mgr->base.lock);
return vcpi;
}
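The rewrite above turns a one-line field read into a defensive, lock-protected lookup: take the topology lock, fetch the payload from the atomic state, and fall back to VCPI 0 on any missing or invalid state. A toy userspace sketch of that fallback shape (hypothetical struct and function names, not the i915 code):

```c
#include <stddef.h>

/* Stand-in for an MST payload; vcpi < 0 means not yet allocated. */
struct toy_payload {
	int vcpi;
};

/*
 * Return the payload's VCPI, or 0 (the value also used for HDMI and
 * DP SST) when the payload is missing or its VCPI is invalid.
 */
static int toy_vcpi_or_default(const struct toy_payload *payload)
{
	if (!payload)
		return 0;
	if (payload->vcpi < 0)
		return 0;
	return payload->vcpi;
}
```

The real helper additionally warns via drm_WARN_ON() on each invalid case before falling back, and holds mgr->base.lock across the lookup.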
/*


@@ -361,7 +361,6 @@ static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo,
const struct ttm_place *place)
{
struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
struct ttm_resource *res = bo->resource;
if (!obj)
return false;
@@ -378,45 +377,7 @@ static bool i915_ttm_eviction_valuable(struct ttm_buffer_object *bo,
if (!i915_gem_object_evictable(obj))
return false;
switch (res->mem_type) {
case I915_PL_LMEM0: {
struct ttm_resource_manager *man =
ttm_manager_type(bo->bdev, res->mem_type);
struct i915_ttm_buddy_resource *bman_res =
to_ttm_buddy_resource(res);
struct drm_buddy *mm = bman_res->mm;
struct drm_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
GEM_BUG_ON(!place->lpfn);
/*
* If we just want something mappable then we can quickly check
* if the current victim resource is using any of the CPU
* visible portion.
*/
if (!place->fpfn &&
place->lpfn == i915_ttm_buddy_man_visible_size(man))
return bman_res->used_visible_size > 0;
/* Real range allocation */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
drm_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
(drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
}
return false;
}
default:
break;
}
return true;
return ttm_bo_eviction_valuable(bo, place);
}
static void i915_ttm_evict_flags(struct ttm_buffer_object *bo,


@@ -173,6 +173,77 @@ static void i915_ttm_buddy_man_free(struct ttm_resource_manager *man,
kfree(bman_res);
}
static bool i915_ttm_buddy_man_intersects(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
struct drm_buddy *mm = &bman->mm;
struct drm_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
GEM_BUG_ON(!place->lpfn);
/*
* If we just want something mappable then we can quickly check
* if the current victim resource is using any of the CPU
* visible portion.
*/
if (!place->fpfn &&
place->lpfn == i915_ttm_buddy_man_visible_size(man))
return bman_res->used_visible_size > 0;
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
drm_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
(drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (place->fpfn < lpfn && place->lpfn > fpfn)
return true;
}
return false;
}
static bool i915_ttm_buddy_man_compatible(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
struct i915_ttm_buddy_manager *bman = to_buddy_manager(man);
struct drm_buddy *mm = &bman->mm;
struct drm_buddy_block *block;
if (!place->fpfn && !place->lpfn)
return true;
GEM_BUG_ON(!place->lpfn);
if (!place->fpfn &&
place->lpfn == i915_ttm_buddy_man_visible_size(man))
return bman_res->used_visible_size == res->num_pages;
/* Check each drm buddy block individually */
list_for_each_entry(block, &bman_res->blocks, link) {
unsigned long fpfn =
drm_buddy_block_offset(block) >> PAGE_SHIFT;
unsigned long lpfn = fpfn +
(drm_buddy_block_size(mm, block) >> PAGE_SHIFT);
if (fpfn < place->fpfn || lpfn > place->lpfn)
return false;
}
return true;
}
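The two new callbacks above reduce to interval arithmetic on page-frame numbers: a buddy block expressed as a half-open range [fpfn, lpfn) *intersects* a placement when the intervals overlap at all, and the resource is *compatible* only when every block lies entirely inside the placement range. A minimal userspace sketch of that logic (hypothetical names, not the i915 helpers themselves):

```c
#include <stdbool.h>

/* A block or placement expressed as a half-open PFN range [fpfn, lpfn). */
struct pfn_range {
	unsigned long fpfn;
	unsigned long lpfn;
};

/* Mirrors the intersects test: any overlap between the ranges is enough. */
static bool range_intersects(const struct pfn_range *place,
			     const struct pfn_range *block)
{
	return place->fpfn < block->lpfn && place->lpfn > block->fpfn;
}

/* Mirrors the compatible test: the block must sit wholly inside place. */
static bool range_compatible(const struct pfn_range *place,
			     const struct pfn_range *block)
{
	return block->fpfn >= place->fpfn && block->lpfn <= place->lpfn;
}
```

The kernel versions walk every drm_buddy_block of the resource and apply exactly these comparisons per block, with a mappable-only fast path checking used_visible_size first.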
static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
@@ -200,6 +271,8 @@ static void i915_ttm_buddy_man_debug(struct ttm_resource_manager *man,
static const struct ttm_resource_manager_func i915_ttm_buddy_manager_func = {
.alloc = i915_ttm_buddy_man_alloc,
.free = i915_ttm_buddy_man_free,
.intersects = i915_ttm_buddy_man_intersects,
.compatible = i915_ttm_buddy_man_compatible,
.debug = i915_ttm_buddy_man_debug,
};


@@ -21,6 +21,15 @@ config DRM_MEDIATEK
This driver provides kernel mode setting and
buffer management to userspace.
config DRM_MEDIATEK_DP
tristate "DRM DPTX Support for MediaTek SoCs"
depends on DRM_MEDIATEK
select PHY_MTK_DP
select DRM_DISPLAY_HELPER
select DRM_DISPLAY_DP_HELPER
help
DRM/KMS Display Port driver for MediaTek SoCs.
config DRM_MEDIATEK_HDMI
tristate "DRM HDMI Support for Mediatek SoCs"
depends on DRM_MEDIATEK


@@ -23,3 +23,5 @@ mediatek-drm-hdmi-objs := mtk_cec.o \
mtk_hdmi_ddc.o
obj-$(CONFIG_DRM_MEDIATEK_HDMI) += mediatek-drm-hdmi.o
obj-$(CONFIG_DRM_MEDIATEK_DP) += mtk_dp.o

File diff suppressed because it is too large


@@ -0,0 +1,356 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (c) 2019-2022 MediaTek Inc.
* Copyright (c) 2022 BayLibre
*/
#ifndef _MTK_DP_REG_H_
#define _MTK_DP_REG_H_
#define SEC_OFFSET 0x4000
#define MTK_DP_HPD_DISCONNECT BIT(1)
#define MTK_DP_HPD_CONNECT BIT(2)
#define MTK_DP_HPD_INTERRUPT BIT(3)
/* offset: 0x0 */
#define DP_PHY_GLB_BIAS_GEN_00 0x0
#define RG_XTP_GLB_BIAS_INTR_CTRL GENMASK(20, 16)
#define DP_PHY_GLB_DPAUX_TX 0x8
#define RG_CKM_PT0_CKTX_IMPSEL GENMASK(23, 20)
#define MTK_DP_0034 0x34
#define DA_XTP_GLB_CKDET_EN_FORCE_VAL BIT(15)
#define DA_XTP_GLB_CKDET_EN_FORCE_EN BIT(14)
#define DA_CKM_INTCKTX_EN_FORCE_VAL BIT(13)
#define DA_CKM_INTCKTX_EN_FORCE_EN BIT(12)
#define DA_CKM_CKTX0_EN_FORCE_VAL BIT(11)
#define DA_CKM_CKTX0_EN_FORCE_EN BIT(10)
#define DA_CKM_XTAL_CK_FORCE_VAL BIT(9)
#define DA_CKM_XTAL_CK_FORCE_EN BIT(8)
#define DA_CKM_BIAS_LPF_EN_FORCE_VAL BIT(7)
#define DA_CKM_BIAS_LPF_EN_FORCE_EN BIT(6)
#define DA_CKM_BIAS_EN_FORCE_VAL BIT(5)
#define DA_CKM_BIAS_EN_FORCE_EN BIT(4)
#define DA_XTP_GLB_AVD10_ON_FORCE_VAL BIT(3)
#define DA_XTP_GLB_AVD10_ON_FORCE BIT(2)
#define DA_XTP_GLB_LDO_EN_FORCE_VAL BIT(1)
#define DA_XTP_GLB_LDO_EN_FORCE_EN BIT(0)
#define DP_PHY_LANE_TX_0 0x104
#define RG_XTP_LN0_TX_IMPSEL_PMOS GENMASK(15, 12)
#define RG_XTP_LN0_TX_IMPSEL_NMOS GENMASK(19, 16)
#define DP_PHY_LANE_TX_1 0x204
#define RG_XTP_LN1_TX_IMPSEL_PMOS GENMASK(15, 12)
#define RG_XTP_LN1_TX_IMPSEL_NMOS GENMASK(19, 16)
#define DP_PHY_LANE_TX_2 0x304
#define RG_XTP_LN2_TX_IMPSEL_PMOS GENMASK(15, 12)
#define RG_XTP_LN2_TX_IMPSEL_NMOS GENMASK(19, 16)
#define DP_PHY_LANE_TX_3 0x404
#define RG_XTP_LN3_TX_IMPSEL_PMOS GENMASK(15, 12)
#define RG_XTP_LN3_TX_IMPSEL_NMOS GENMASK(19, 16)
#define MTK_DP_1040 0x1040
#define RG_DPAUX_RX_VALID_DEGLITCH_EN BIT(2)
#define RG_XTP_GLB_CKDET_EN BIT(1)
#define RG_DPAUX_RX_EN BIT(0)
/* offset: TOP_OFFSET (0x2000) */
#define MTK_DP_TOP_PWR_STATE 0x2000
#define DP_PWR_STATE_MASK GENMASK(1, 0)
#define DP_PWR_STATE_BANDGAP BIT(0)
#define DP_PWR_STATE_BANDGAP_TPLL BIT(1)
#define DP_PWR_STATE_BANDGAP_TPLL_LANE GENMASK(1, 0)
#define MTK_DP_TOP_SWING_EMP 0x2004
#define DP_TX0_VOLT_SWING_MASK GENMASK(1, 0)
#define DP_TX0_VOLT_SWING_SHIFT 0
#define DP_TX0_PRE_EMPH_MASK GENMASK(3, 2)
#define DP_TX0_PRE_EMPH_SHIFT 2
#define DP_TX1_VOLT_SWING_MASK GENMASK(9, 8)
#define DP_TX1_VOLT_SWING_SHIFT 8
#define DP_TX1_PRE_EMPH_MASK GENMASK(11, 10)
#define DP_TX2_VOLT_SWING_MASK GENMASK(17, 16)
#define DP_TX2_PRE_EMPH_MASK GENMASK(19, 18)
#define DP_TX3_VOLT_SWING_MASK GENMASK(25, 24)
#define DP_TX3_PRE_EMPH_MASK GENMASK(27, 26)
#define MTK_DP_TOP_RESET_AND_PROBE 0x2020
#define SW_RST_B_PHYD BIT(4)
#define MTK_DP_TOP_IRQ_MASK 0x202c
#define IRQ_MASK_AUX_TOP_IRQ BIT(2)
#define MTK_DP_TOP_MEM_PD 0x2038
#define MEM_ISO_EN BIT(0)
#define FUSE_SEL BIT(2)
/* offset: ENC0_OFFSET (0x3000) */
#define MTK_DP_ENC0_P0_3000 0x3000
#define LANE_NUM_DP_ENC0_P0_MASK GENMASK(1, 0)
#define VIDEO_MUTE_SW_DP_ENC0_P0 BIT(2)
#define VIDEO_MUTE_SEL_DP_ENC0_P0 BIT(3)
#define ENHANCED_FRAME_EN_DP_ENC0_P0 BIT(4)
#define MTK_DP_ENC0_P0_3004 0x3004
#define VIDEO_M_CODE_SEL_DP_ENC0_P0_MASK BIT(8)
#define DP_TX_ENCODER_4P_RESET_SW_DP_ENC0_P0 BIT(9)
#define MTK_DP_ENC0_P0_3010 0x3010
#define HTOTAL_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3014 0x3014
#define VTOTAL_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3018 0x3018
#define HSTART_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_301C 0x301c
#define VSTART_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3020 0x3020
#define HWIDTH_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3024 0x3024
#define VHEIGHT_SW_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3028 0x3028
#define HSW_SW_DP_ENC0_P0_MASK GENMASK(14, 0)
#define HSP_SW_DP_ENC0_P0_MASK BIT(15)
#define MTK_DP_ENC0_P0_302C 0x302c
#define VSW_SW_DP_ENC0_P0_MASK GENMASK(14, 0)
#define VSP_SW_DP_ENC0_P0_MASK BIT(15)
#define MTK_DP_ENC0_P0_3030 0x3030
#define HTOTAL_SEL_DP_ENC0_P0 BIT(0)
#define VTOTAL_SEL_DP_ENC0_P0 BIT(1)
#define HSTART_SEL_DP_ENC0_P0 BIT(2)
#define VSTART_SEL_DP_ENC0_P0 BIT(3)
#define HWIDTH_SEL_DP_ENC0_P0 BIT(4)
#define VHEIGHT_SEL_DP_ENC0_P0 BIT(5)
#define HSP_SEL_DP_ENC0_P0 BIT(6)
#define HSW_SEL_DP_ENC0_P0 BIT(7)
#define VSP_SEL_DP_ENC0_P0 BIT(8)
#define VSW_SEL_DP_ENC0_P0 BIT(9)
#define VBID_AUDIO_MUTE_FLAG_SW_DP_ENC0_P0 BIT(11)
#define VBID_AUDIO_MUTE_FLAG_SEL_DP_ENC0_P0 BIT(12)
#define MTK_DP_ENC0_P0_3034 0x3034
#define MTK_DP_ENC0_P0_3038 0x3038
#define VIDEO_SOURCE_SEL_DP_ENC0_P0_MASK BIT(11)
#define MTK_DP_ENC0_P0_303C 0x303c
#define SRAM_START_READ_THRD_DP_ENC0_P0_MASK GENMASK(5, 0)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_MASK GENMASK(10, 8)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_16BIT (0 << 8)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_12BIT (1 << 8)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_10BIT (2 << 8)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_8BIT (3 << 8)
#define VIDEO_COLOR_DEPTH_DP_ENC0_P0_6BIT (4 << 8)
#define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK GENMASK(14, 12)
#define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB (0 << 12)
#define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422 (1 << 12)
#define PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR420 (2 << 12)
#define VIDEO_MN_GEN_EN_DP_ENC0_P0 BIT(15)
#define MTK_DP_ENC0_P0_3040 0x3040
#define SDP_DOWN_CNT_DP_ENC0_P0_VAL 0x20
#define SDP_DOWN_CNT_INIT_DP_ENC0_P0_MASK GENMASK(11, 0)
#define MTK_DP_ENC0_P0_304C 0x304c
#define VBID_VIDEO_MUTE_DP_ENC0_P0_MASK BIT(2)
#define SDP_VSYNC_RISING_MASK_DP_ENC0_P0_MASK BIT(8)
#define MTK_DP_ENC0_P0_3064 0x3064
#define HDE_NUM_LAST_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3088 0x3088
#define AU_EN_DP_ENC0_P0 BIT(6)
#define AUDIO_8CH_EN_DP_ENC0_P0_MASK BIT(7)
#define AUDIO_8CH_SEL_DP_ENC0_P0_MASK BIT(8)
#define AUDIO_2CH_EN_DP_ENC0_P0_MASK BIT(14)
#define AUDIO_2CH_SEL_DP_ENC0_P0_MASK BIT(15)
#define MTK_DP_ENC0_P0_308C 0x308c
#define CH_STATUS_0_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3090 0x3090
#define CH_STATUS_1_DP_ENC0_P0_MASK GENMASK(15, 0)
#define MTK_DP_ENC0_P0_3094 0x3094
#define CH_STATUS_2_DP_ENC0_P0_MASK GENMASK(7, 0)
#define MTK_DP_ENC0_P0_30A0 0x30a0
#define DP_ENC0_30A0_MASK (BIT(7) | BIT(8) | BIT(12))
#define MTK_DP_ENC0_P0_30A4 0x30a4
#define AU_TS_CFG_DP_ENC0_P0_MASK GENMASK(7, 0)
#define MTK_DP_ENC0_P0_30A8 0x30a8
#define MTK_DP_ENC0_P0_30BC 0x30bc
#define ISRC_CONT_DP_ENC0_P0 BIT(0)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MASK GENMASK(10, 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_2 (1 << 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_4 (2 << 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_MUL_8 (3 << 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_2 (5 << 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_4 (6 << 8)
#define AUDIO_M_CODE_MULT_DIV_SEL_DP_ENC0_P0_DIV_8 (7 << 8)
#define MTK_DP_ENC0_P0_30D8 0x30d8
#define MTK_DP_ENC0_P0_312C 0x312c
#define ASP_HB2_DP_ENC0_P0_MASK GENMASK(7, 0)
#define ASP_HB3_DP_ENC0_P0_MASK GENMASK(15, 8)
#define MTK_DP_ENC0_P0_3130 0x3130
#define MTK_DP_ENC0_P0_3138 0x3138
#define MTK_DP_ENC0_P0_3154 0x3154
#define PGEN_HTOTAL_DP_ENC0_P0_MASK GENMASK(13, 0)
#define MTK_DP_ENC0_P0_3158 0x3158
#define PGEN_HSYNC_RISING_DP_ENC0_P0_MASK GENMASK(13, 0)
#define MTK_DP_ENC0_P0_315C 0x315c
#define PGEN_HSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK GENMASK(13, 0)
#define MTK_DP_ENC0_P0_3160 0x3160
#define PGEN_HFDE_START_DP_ENC0_P0_MASK GENMASK(13, 0)
#define MTK_DP_ENC0_P0_3164 0x3164
#define PGEN_HFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK GENMASK(13, 0)
#define MTK_DP_ENC0_P0_3168 0x3168
#define PGEN_VTOTAL_DP_ENC0_P0_MASK GENMASK(12, 0)
#define MTK_DP_ENC0_P0_316C 0x316c
#define PGEN_VSYNC_RISING_DP_ENC0_P0_MASK GENMASK(12, 0)
#define MTK_DP_ENC0_P0_3170 0x3170
#define PGEN_VSYNC_PULSE_WIDTH_DP_ENC0_P0_MASK GENMASK(12, 0)
#define MTK_DP_ENC0_P0_3174 0x3174
#define PGEN_VFDE_START_DP_ENC0_P0_MASK GENMASK(12, 0)
#define MTK_DP_ENC0_P0_3178 0x3178
#define PGEN_VFDE_ACTIVE_WIDTH_DP_ENC0_P0_MASK GENMASK(12, 0)
#define MTK_DP_ENC0_P0_31B0 0x31b0
#define PGEN_PATTERN_SEL_VAL 4
#define PGEN_PATTERN_SEL_MASK GENMASK(6, 4)
#define MTK_DP_ENC0_P0_31EC 0x31ec
#define AUDIO_CH_SRC_SEL_DP_ENC0_P0 BIT(4)
#define ISRC1_HB3_DP_ENC0_P0_MASK GENMASK(15, 8)
/* offset: ENC1_OFFSET (0x3200) */
#define MTK_DP_ENC1_P0_3200 0x3200
#define MTK_DP_ENC1_P0_3280 0x3280
#define SDP_PACKET_TYPE_DP_ENC1_P0_MASK GENMASK(4, 0)
#define SDP_PACKET_W_DP_ENC1_P0 BIT(5)
#define SDP_PACKET_W_DP_ENC1_P0_MASK BIT(5)
#define MTK_DP_ENC1_P0_328C 0x328c
#define VSC_DATA_RDY_VESA_DP_ENC1_P0_MASK BIT(7)
#define MTK_DP_ENC1_P0_3300 0x3300
#define VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_VAL 2
#define VIDEO_AFIFO_RDY_SEL_DP_ENC1_P0_MASK GENMASK(9, 8)
#define MTK_DP_ENC1_P0_3304 0x3304
#define AU_PRTY_REGEN_DP_ENC1_P0_MASK BIT(8)
#define AU_CH_STS_REGEN_DP_ENC1_P0_MASK BIT(9)
#define AUDIO_SAMPLE_PRSENT_REGEN_DP_ENC1_P0_MASK BIT(12)
#define MTK_DP_ENC1_P0_3324 0x3324
#define AUDIO_SOURCE_MUX_DP_ENC1_P0_MASK GENMASK(9, 8)
#define AUDIO_SOURCE_MUX_DP_ENC1_P0_DPRX 0
#define MTK_DP_ENC1_P0_3364 0x3364
#define SDP_DOWN_CNT_IN_HBLANK_DP_ENC1_P0_VAL 0x20
#define SDP_DOWN_CNT_INIT_IN_HBLANK_DP_ENC1_P0_MASK GENMASK(11, 0)
#define FIFO_READ_START_POINT_DP_ENC1_P0_VAL 4
#define FIFO_READ_START_POINT_DP_ENC1_P0_MASK GENMASK(15, 12)
#define MTK_DP_ENC1_P0_3368 0x3368
#define VIDEO_SRAM_FIFO_CNT_RESET_SEL_DP_ENC1_P0 BIT(0)
#define VIDEO_STABLE_CNT_THRD_DP_ENC1_P0 BIT(4)
#define SDP_DP13_EN_DP_ENC1_P0 BIT(8)
#define BS2BS_MODE_DP_ENC1_P0 BIT(12)
#define BS2BS_MODE_DP_ENC1_P0_MASK GENMASK(13, 12)
#define BS2BS_MODE_DP_ENC1_P0_VAL 1
#define DP_ENC1_P0_3368_VAL (VIDEO_SRAM_FIFO_CNT_RESET_SEL_DP_ENC1_P0 | \
VIDEO_STABLE_CNT_THRD_DP_ENC1_P0 | \
SDP_DP13_EN_DP_ENC1_P0 | \
BS2BS_MODE_DP_ENC1_P0)
#define MTK_DP_ENC1_P0_33F4 0x33f4
#define DP_ENC_DUMMY_RW_1_AUDIO_RST_EN BIT(0)
#define DP_ENC_DUMMY_RW_1 BIT(9)
/* offset: TRANS_OFFSET (0x3400) */
#define MTK_DP_TRANS_P0_3400 0x3400
#define PATTERN1_EN_DP_TRANS_P0_MASK BIT(12)
#define PATTERN2_EN_DP_TRANS_P0_MASK BIT(13)
#define PATTERN3_EN_DP_TRANS_P0_MASK BIT(14)
#define PATTERN4_EN_DP_TRANS_P0_MASK BIT(15)
#define MTK_DP_TRANS_P0_3404 0x3404
#define DP_SCR_EN_DP_TRANS_P0_MASK BIT(0)
#define MTK_DP_TRANS_P0_340C 0x340c
#define DP_TX_TRANSMITTER_4P_RESET_SW_DP_TRANS_P0 BIT(13)
#define MTK_DP_TRANS_P0_3410 0x3410
#define HPD_DEB_THD_DP_TRANS_P0_MASK GENMASK(3, 0)
#define HPD_INT_THD_DP_TRANS_P0_MASK GENMASK(7, 4)
#define HPD_INT_THD_DP_TRANS_P0_LOWER_500US (2 << 4)
#define HPD_INT_THD_DP_TRANS_P0_UPPER_1100US (2 << 6)
#define HPD_DISC_THD_DP_TRANS_P0_MASK GENMASK(11, 8)
#define HPD_CONN_THD_DP_TRANS_P0_MASK GENMASK(15, 12)
#define MTK_DP_TRANS_P0_3414 0x3414
#define HPD_DB_DP_TRANS_P0_MASK BIT(2)
#define MTK_DP_TRANS_P0_3418 0x3418
#define IRQ_CLR_DP_TRANS_P0_MASK GENMASK(3, 0)
#define IRQ_MASK_DP_TRANS_P0_MASK GENMASK(7, 4)
#define IRQ_MASK_DP_TRANS_P0_DISC_IRQ (BIT(1) << 4)
#define IRQ_MASK_DP_TRANS_P0_CONN_IRQ (BIT(2) << 4)
#define IRQ_MASK_DP_TRANS_P0_INT_IRQ (BIT(3) << 4)
#define IRQ_STATUS_DP_TRANS_P0_MASK GENMASK(15, 12)
#define MTK_DP_TRANS_P0_342C 0x342c
#define XTAL_FREQ_DP_TRANS_P0_DEFAULT (BIT(0) | BIT(3) | BIT(5) | BIT(6))
#define XTAL_FREQ_DP_TRANS_P0_MASK GENMASK(7, 0)
#define MTK_DP_TRANS_P0_3430 0x3430
#define HPD_INT_THD_ECO_DP_TRANS_P0_MASK GENMASK(1, 0)
#define HPD_INT_THD_ECO_DP_TRANS_P0_HIGH_BOUND_EXT BIT(1)
#define MTK_DP_TRANS_P0_34A4 0x34a4
#define LANE_NUM_DP_TRANS_P0_MASK GENMASK(3, 2)
#define MTK_DP_TRANS_P0_3540 0x3540
#define FEC_EN_DP_TRANS_P0_MASK BIT(0)
#define FEC_CLOCK_EN_MODE_DP_TRANS_P0 BIT(3)
#define MTK_DP_TRANS_P0_3580 0x3580
#define POST_MISC_DATA_LANE0_OV_DP_TRANS_P0_MASK BIT(8)
#define POST_MISC_DATA_LANE1_OV_DP_TRANS_P0_MASK BIT(9)
#define POST_MISC_DATA_LANE2_OV_DP_TRANS_P0_MASK BIT(10)
#define POST_MISC_DATA_LANE3_OV_DP_TRANS_P0_MASK BIT(11)
#define MTK_DP_TRANS_P0_35C8 0x35c8
#define SW_IRQ_CLR_DP_TRANS_P0_MASK GENMASK(15, 0)
#define SW_IRQ_STATUS_DP_TRANS_P0_MASK GENMASK(15, 0)
#define MTK_DP_TRANS_P0_35D0 0x35d0
#define SW_IRQ_FINAL_STATUS_DP_TRANS_P0_MASK GENMASK(15, 0)
#define MTK_DP_TRANS_P0_35F0 0x35f0
#define DP_TRANS_DUMMY_RW_0 BIT(3)
#define DP_TRANS_DUMMY_RW_0_MASK GENMASK(3, 2)
/* offset: AUX_OFFSET (0x3600) */
#define MTK_DP_AUX_P0_360C 0x360c
#define AUX_TIMEOUT_THR_AUX_TX_P0_MASK GENMASK(12, 0)
#define AUX_TIMEOUT_THR_AUX_TX_P0_VAL 0x1595
#define MTK_DP_AUX_P0_3614 0x3614
#define AUX_RX_UI_CNT_THR_AUX_TX_P0_MASK GENMASK(6, 0)
#define AUX_RX_UI_CNT_THR_AUX_FOR_26M 13
#define MTK_DP_AUX_P0_3618 0x3618
#define AUX_RX_FIFO_FULL_AUX_TX_P0_MASK BIT(9)
#define AUX_RX_FIFO_WRITE_POINTER_AUX_TX_P0_MASK GENMASK(3, 0)
#define MTK_DP_AUX_P0_3620 0x3620
#define AUX_RD_MODE_AUX_TX_P0_MASK BIT(9)
#define AUX_RX_FIFO_READ_PULSE_TX_P0 BIT(8)
#define AUX_RX_FIFO_READ_DATA_AUX_TX_P0_MASK GENMASK(7, 0)
#define MTK_DP_AUX_P0_3624 0x3624
#define AUX_RX_REPLY_COMMAND_AUX_TX_P0_MASK GENMASK(3, 0)
#define MTK_DP_AUX_P0_3628 0x3628
#define AUX_RX_PHY_STATE_AUX_TX_P0_MASK GENMASK(9, 0)
#define AUX_RX_PHY_STATE_AUX_TX_P0_RX_IDLE BIT(0)
#define MTK_DP_AUX_P0_362C 0x362c
#define AUX_NO_LENGTH_AUX_TX_P0 BIT(0)
#define AUX_TX_AUXTX_OV_EN_AUX_TX_P0_MASK BIT(1)
#define AUX_RESERVED_RW_0_AUX_TX_P0_MASK GENMASK(15, 2)
#define MTK_DP_AUX_P0_3630 0x3630
#define AUX_TX_REQUEST_READY_AUX_TX_P0 BIT(3)
#define MTK_DP_AUX_P0_3634 0x3634
#define AUX_TX_OVER_SAMPLE_RATE_AUX_TX_P0_MASK GENMASK(15, 8)
#define AUX_TX_OVER_SAMPLE_RATE_FOR_26M 25
#define MTK_DP_AUX_P0_3640 0x3640
#define AUX_RX_AUX_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(6)
#define AUX_RX_EDID_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(5)
#define AUX_RX_MCCS_RECV_COMPLETE_IRQ_AUX_TX_P0 BIT(4)
#define AUX_RX_CMD_RECV_IRQ_AUX_TX_P0 BIT(3)
#define AUX_RX_ADDR_RECV_IRQ_AUX_TX_P0 BIT(2)
#define AUX_RX_DATA_RECV_IRQ_AUX_TX_P0 BIT(1)
#define AUX_400US_TIMEOUT_IRQ_AUX_TX_P0 BIT(0)
#define DP_AUX_P0_3640_VAL (AUX_400US_TIMEOUT_IRQ_AUX_TX_P0 | \
AUX_RX_DATA_RECV_IRQ_AUX_TX_P0 | \
AUX_RX_ADDR_RECV_IRQ_AUX_TX_P0 | \
AUX_RX_CMD_RECV_IRQ_AUX_TX_P0 | \
AUX_RX_MCCS_RECV_COMPLETE_IRQ_AUX_TX_P0 | \
AUX_RX_EDID_RECV_COMPLETE_IRQ_AUX_TX_P0 | \
AUX_RX_AUX_RECV_COMPLETE_IRQ_AUX_TX_P0)
#define MTK_DP_AUX_P0_3644 0x3644
#define MCU_REQUEST_COMMAND_AUX_TX_P0_MASK GENMASK(3, 0)
#define MTK_DP_AUX_P0_3648 0x3648
#define MCU_REQUEST_ADDRESS_LSB_AUX_TX_P0_MASK GENMASK(15, 0)
#define MTK_DP_AUX_P0_364C 0x364c
#define MCU_REQUEST_ADDRESS_MSB_AUX_TX_P0_MASK GENMASK(3, 0)
#define MTK_DP_AUX_P0_3650 0x3650
#define MCU_REQ_DATA_NUM_AUX_TX_P0_MASK GENMASK(15, 12)
#define PHY_FIFO_RST_AUX_TX_P0_MASK BIT(9)
#define MCU_ACK_TRAN_COMPLETE_AUX_TX_P0 BIT(8)
#define MTK_DP_AUX_P0_3658 0x3658
#define AUX_TX_OV_EN_AUX_TX_P0_MASK BIT(0)
#define MTK_DP_AUX_P0_3690 0x3690
#define RX_REPLY_COMPLETE_MODE_AUX_TX_P0 BIT(8)
#define MTK_DP_AUX_P0_3704 0x3704
#define AUX_TX_FIFO_WDATA_NEW_MODE_T_AUX_TX_P0_MASK BIT(1)
#define AUX_TX_FIFO_NEW_MODE_EN_AUX_TX_P0 BIT(2)
#define MTK_DP_AUX_P0_3708 0x3708
#define MTK_DP_AUX_P0_37C8 0x37c8
#define MTK_ATOP_EN_AUX_TX_P0 BIT(0)
#endif /*_MTK_DP_REG_H_*/
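The register map above describes every field with the kernel's BIT()/GENMASK() macros. As an illustration only (a userspace reimplementation, not the kernel's bits.h/bitfield.h), here is how such a mask describes a field and how a value is packed into and extracted from it:

```c
#include <stdint.h>

/* Userspace stand-ins for the kernel's BIT()/GENMASK() helpers. */
#define BIT(n)		(1UL << (n))
#define GENMASK(h, l)	(((~0UL) << (l)) & \
			 (~0UL >> (8 * sizeof(unsigned long) - 1 - (h))))

/* Insert val into the field described by mask (cf. FIELD_PREP()). */
static unsigned long field_prep(unsigned long mask, unsigned long val)
{
	return (val << __builtin_ctzl(mask)) & mask;
}

/* Extract the field back out of a register word (cf. FIELD_GET()). */
static unsigned long field_get(unsigned long mask, unsigned long reg)
{
	return (reg & mask) >> __builtin_ctzl(mask);
}
```

For example, `DP_TX0_PRE_EMPH_MASK` is `GENMASK(3, 2)`, so packing pre-emphasis level 2 yields register bits 0x8, and reading those bits back recovers 2.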


@@ -1242,10 +1242,15 @@ void msm_drv_shutdown(struct platform_device *pdev)
struct msm_drm_private *priv = platform_get_drvdata(pdev);
struct drm_device *drm = priv ? priv->dev : NULL;
if (!priv || !priv->kms)
return;
drm_atomic_helper_shutdown(drm);
/*
* Shutdown the hw if we're far enough along where things might be on.
* If we run this too early, we'll end up panicking in any variety of
* places. Since we don't register the drm device until late in
* msm_drm_init, drm_dev->registered is used as an indicator that the
* shutdown will be successful.
*/
if (drm && drm->registered)
drm_atomic_helper_shutdown(drm);
}
static struct platform_driver msm_platform_driver = {


@@ -8,7 +8,6 @@
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
@@ -16,10 +15,8 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_bridge.h>
#include <drm/drm_connector.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_gem_dma_helper.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_mode_config.h>
@@ -45,23 +42,11 @@ static int lcdif_attach_bridge(struct lcdif_drm_private *lcdif)
{
struct drm_device *drm = lcdif->drm;
struct drm_bridge *bridge;
struct drm_panel *panel;
int ret;
ret = drm_of_find_panel_or_bridge(drm->dev->of_node, 0, 0, &panel,
&bridge);
if (ret)
return ret;
if (panel) {
bridge = devm_drm_panel_bridge_add_typed(drm->dev, panel,
DRM_MODE_CONNECTOR_DPI);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
}
if (!bridge)
return -ENODEV;
bridge = devm_drm_of_get_bridge(drm->dev, drm->dev->of_node, 0, 0);
if (IS_ERR(bridge))
return PTR_ERR(bridge);
ret = drm_bridge_attach(&lcdif->encoder, bridge, NULL, 0);
if (ret)


@@ -8,6 +8,7 @@
#ifndef __LCDIF_DRV_H__
#define __LCDIF_DRV_H__
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_device.h>
#include <drm/drm_encoder.h>


@@ -17,9 +17,9 @@
#include <drm/drm_bridge.h>
#include <drm/drm_crtc.h>
#include <drm/drm_encoder.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_fb_dma_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_gem_atomic_helper.h>
#include <drm/drm_gem_dma_helper.h>
#include <drm/drm_plane.h>
@@ -122,8 +122,8 @@ static void lcdif_set_mode(struct lcdif_drm_private *lcdif, u32 bus_flags)
writel(ctrl, lcdif->base + LCDC_V8_CTRL);
writel(DISP_SIZE_DELTA_Y(m->crtc_vdisplay) |
DISP_SIZE_DELTA_X(m->crtc_hdisplay),
writel(DISP_SIZE_DELTA_Y(m->vdisplay) |
DISP_SIZE_DELTA_X(m->hdisplay),
lcdif->base + LCDC_V8_DISP_SIZE);
writel(HSYN_PARA_BP_H(m->htotal - m->hsync_end) |
@@ -138,8 +138,8 @@ static void lcdif_set_mode(struct lcdif_drm_private *lcdif, u32 bus_flags)
VSYN_HSYN_WIDTH_PW_H(m->hsync_end - m->hsync_start),
lcdif->base + LCDC_V8_VSYN_HSYN_WIDTH);
writel(CTRLDESCL0_1_HEIGHT(m->crtc_vdisplay) |
CTRLDESCL0_1_WIDTH(m->crtc_hdisplay),
writel(CTRLDESCL0_1_HEIGHT(m->vdisplay) |
CTRLDESCL0_1_WIDTH(m->hdisplay),
lcdif->base + LCDC_V8_CTRLDESCL0_1);
writel(CTRLDESCL0_3_PITCH(lcdif->crtc.primary->state->fb->pitches[0]),
@@ -203,7 +203,7 @@ static void lcdif_crtc_mode_set_nofb(struct lcdif_drm_private *lcdif,
DRM_DEV_DEBUG_DRIVER(drm->dev, "Pixel clock: %dkHz (actual: %dkHz)\n",
m->crtc_clock,
(int)(clk_get_rate(lcdif->clk) / 1000));
DRM_DEV_DEBUG_DRIVER(drm->dev, "Connector bus_flags: 0x%08X\n",
DRM_DEV_DEBUG_DRIVER(drm->dev, "Bridge bus_flags: 0x%08X\n",
bus_flags);
DRM_DEV_DEBUG_DRIVER(drm->dev, "Mode flags: 0x%08X\n", m->flags);


@@ -932,6 +932,7 @@ struct nv50_msto {
struct nv50_head *head;
struct nv50_mstc *mstc;
bool disabled;
bool enabled;
};
struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
@@ -947,57 +948,37 @@ struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder)
return msto->mstc->mstm->outp;
}
static struct drm_dp_payload *
nv50_msto_payload(struct nv50_msto *msto)
{
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
int vcpi = mstc->port->vcpi.vcpi, i;
WARN_ON(!mutex_is_locked(&mstm->mgr.payload_lock));
NV_ATOMIC(drm, "%s: vcpi %d\n", msto->encoder.name, vcpi);
for (i = 0; i < mstm->mgr.max_payloads; i++) {
struct drm_dp_payload *payload = &mstm->mgr.payloads[i];
NV_ATOMIC(drm, "%s: %d: vcpi %d start 0x%02x slots 0x%02x\n",
mstm->outp->base.base.name, i, payload->vcpi,
payload->start_slot, payload->num_slots);
}
for (i = 0; i < mstm->mgr.max_payloads; i++) {
struct drm_dp_payload *payload = &mstm->mgr.payloads[i];
if (payload->vcpi == vcpi)
return payload;
}
return NULL;
}
static void
nv50_msto_cleanup(struct nv50_msto *msto)
nv50_msto_cleanup(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_topology_mgr *mgr,
struct nv50_msto *msto)
{
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
if (!msto->disabled)
return;
struct drm_dp_mst_atomic_payload *payload =
drm_atomic_get_mst_payload_state(mst_state, msto->mstc->port);
NV_ATOMIC(drm, "%s: msto cleanup\n", msto->encoder.name);
drm_dp_mst_deallocate_vcpi(&mstm->mgr, mstc->port);
msto->mstc = NULL;
msto->disabled = false;
if (msto->disabled) {
msto->mstc = NULL;
msto->disabled = false;
} else if (msto->enabled) {
drm_dp_add_payload_part2(mgr, state, payload);
msto->enabled = false;
}
}
static void
nv50_msto_prepare(struct nv50_msto *msto)
nv50_msto_prepare(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct drm_dp_mst_topology_mgr *mgr,
struct nv50_msto *msto)
{
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
struct drm_dp_mst_atomic_payload *payload;
struct {
struct nv50_disp_mthd_v1 base;
struct nv50_disp_sor_dp_mst_vcpi_v0 vcpi;
@@ -1009,17 +990,21 @@ nv50_msto_prepare(struct nv50_msto *msto)
(0x0100 << msto->head->base.index),
};
mutex_lock(&mstm->mgr.payload_lock);
NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name);
if (mstc->port->vcpi.vcpi > 0) {
struct drm_dp_payload *payload = nv50_msto_payload(msto);
if (payload) {
args.vcpi.start_slot = payload->start_slot;
args.vcpi.num_slots = payload->num_slots;
args.vcpi.pbn = mstc->port->vcpi.pbn;
args.vcpi.aligned_pbn = mstc->port->vcpi.aligned_pbn;
}
payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port);
// TODO: Figure out if we want to do a better job of handling VCPI allocation failures here?
if (msto->disabled) {
drm_dp_remove_payload(mgr, mst_state, payload);
} else {
if (msto->enabled)
drm_dp_add_payload_part1(mgr, mst_state, payload);
args.vcpi.start_slot = payload->vc_start_slot;
args.vcpi.num_slots = payload->time_slots;
args.vcpi.pbn = payload->pbn;
args.vcpi.aligned_pbn = payload->time_slots * mst_state->pbn_div;
}
NV_ATOMIC(drm, "%s: %s: %02x %02x %04x %04x\n",
@@ -1028,7 +1013,6 @@ nv50_msto_prepare(struct nv50_msto *msto)
args.vcpi.pbn, args.vcpi.aligned_pbn);
nvif_mthd(&drm->display->disp.object, 0, &args, sizeof(args));
mutex_unlock(&mstm->mgr.payload_lock);
}
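nv50_msto_prepare() now derives the programmed VC parameters entirely from the atomic payload: the start slot and slot count come straight from the state, and the aligned PBN is reconstructed as `time_slots * pbn_div`. A standalone sketch of that slot arithmetic, assuming the usual DP-MST helper convention that a payload of `pbn` bandwidth units needs ceil(pbn / pbn_div) time slots (illustration only, not the nouveau code):

```c
/* Round-up division, like the kernel's DIV_ROUND_UP(). */
static unsigned int div_round_up(unsigned int n, unsigned int d)
{
	return (n + d - 1) / d;
}

/* Time slots needed for a payload of `pbn`, given the link's pbn_div. */
static unsigned int mst_time_slots(unsigned int pbn, unsigned int pbn_div)
{
	return div_round_up(pbn, pbn_div);
}

/* The aligned PBN those slots actually reserve (always >= pbn). */
static unsigned int mst_aligned_pbn(unsigned int pbn, unsigned int pbn_div)
{
	return mst_time_slots(pbn, pbn_div) * pbn_div;
}
```

This is why the old explicit `aligned_pbn` field could be dropped from the driver: it is recomputable from the two values the topology state already carries.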
static int
@@ -1038,6 +1022,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
{
struct drm_atomic_state *state = crtc_state->state;
struct drm_connector *connector = conn_state->connector;
struct drm_dp_mst_topology_state *mst_state;
struct nv50_mstc *mstc = nv50_mstc(connector);
struct nv50_mstm *mstm = mstc->mstm;
struct nv50_head_atom *asyh = nv50_head_atom(crtc_state);
@@ -1049,7 +1034,7 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
if (ret)
return ret;
if (!crtc_state->mode_changed && !crtc_state->connectors_changed)
if (!drm_atomic_crtc_needs_modeset(crtc_state))
return 0;
/*
@@ -1065,8 +1050,18 @@ nv50_msto_atomic_check(struct drm_encoder *encoder,
false);
}
slots = drm_dp_atomic_find_vcpi_slots(state, &mstm->mgr, mstc->port,
asyh->dp.pbn, 0);
mst_state = drm_atomic_get_mst_topology_state(state, &mstm->mgr);
if (IS_ERR(mst_state))
return PTR_ERR(mst_state);
if (!mst_state->pbn_div) {
struct nouveau_encoder *outp = mstc->mstm->outp;
mst_state->pbn_div = drm_dp_get_vc_payload_bw(&mstm->mgr,
outp->dp.link_bw, outp->dp.link_nr);
}
slots = drm_dp_atomic_find_time_slots(state, &mstm->mgr, mstc->port, asyh->dp.pbn);
if (slots < 0)
return slots;
@@ -1098,7 +1093,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
struct drm_connector *connector;
struct drm_connector_list_iter conn_iter;
u8 proto;
bool r;
drm_connector_list_iter_begin(encoder->dev, &conn_iter);
drm_for_each_connector_iter(connector, &conn_iter) {
@@ -1113,10 +1107,6 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
if (WARN_ON(!mstc))
return;
r = drm_dp_mst_allocate_vcpi(&mstm->mgr, mstc->port, asyh->dp.pbn, asyh->dp.tu);
if (!r)
DRM_DEBUG_KMS("Failed to allocate VCPI\n");
if (!mstm->links++)
nv50_outp_acquire(mstm->outp, false /*XXX: MST audio.*/);
@@ -1129,6 +1119,7 @@ nv50_msto_atomic_enable(struct drm_encoder *encoder, struct drm_atomic_state *st
nv50_dp_bpc_to_depth(asyh->or.bpc));
msto->mstc = mstc;
msto->enabled = true;
mstm->modified = true;
}
@@ -1139,8 +1130,6 @@ nv50_msto_atomic_disable(struct drm_encoder *encoder, struct drm_atomic_state *s
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
drm_dp_mst_reset_vcpi_slots(&mstm->mgr, mstc->port);
mstm->outp->update(mstm->outp, msto->head->base.index, NULL, 0, 0);
mstm->modified = true;
if (!--mstm->links)
@@ -1255,29 +1244,8 @@ nv50_mstc_atomic_check(struct drm_connector *connector,
{
struct nv50_mstc *mstc = nv50_mstc(connector);
struct drm_dp_mst_topology_mgr *mgr = &mstc->mstm->mgr;
struct drm_connector_state *new_conn_state =
drm_atomic_get_new_connector_state(state, connector);
struct drm_connector_state *old_conn_state =
drm_atomic_get_old_connector_state(state, connector);
struct drm_crtc_state *crtc_state;
struct drm_crtc *new_crtc = new_conn_state->crtc;
if (!old_conn_state->crtc)
return 0;
/* We only want to free VCPI if this state disables the CRTC on this
* connector
*/
if (new_crtc) {
crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc);
if (!crtc_state ||
!drm_atomic_crtc_needs_modeset(crtc_state) ||
crtc_state->enable)
return 0;
}
return drm_dp_atomic_release_vcpi_slots(state, mgr, mstc->port);
return drm_dp_atomic_release_time_slots(state, mgr, mstc->port);
}
static int
@@ -1381,7 +1349,9 @@ nv50_mstc_new(struct nv50_mstm *mstm, struct drm_dp_mst_port *port,
}
static void
nv50_mstm_cleanup(struct nv50_mstm *mstm)
nv50_mstm_cleanup(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct nv50_mstm *mstm)
{
struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev);
struct drm_encoder *encoder;
@@ -1389,14 +1359,12 @@ nv50_mstm_cleanup(struct nv50_mstm *mstm)
NV_ATOMIC(drm, "%s: mstm cleanup\n", mstm->outp->base.base.name);
drm_dp_check_act_status(&mstm->mgr);
drm_dp_update_payload_part2(&mstm->mgr);
drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
struct nv50_msto *msto = nv50_msto(encoder);
struct nv50_mstc *mstc = msto->mstc;
if (mstc && mstc->mstm == mstm)
nv50_msto_cleanup(msto);
nv50_msto_cleanup(state, mst_state, &mstm->mgr, msto);
}
}
@@ -1404,20 +1372,34 @@ nv50_mstm_cleanup(struct nv50_mstm *mstm)
}
static void
nv50_mstm_prepare(struct nv50_mstm *mstm)
nv50_mstm_prepare(struct drm_atomic_state *state,
struct drm_dp_mst_topology_state *mst_state,
struct nv50_mstm *mstm)
{
struct nouveau_drm *drm = nouveau_drm(mstm->outp->base.base.dev);
struct drm_encoder *encoder;
NV_ATOMIC(drm, "%s: mstm prepare\n", mstm->outp->base.base.name);
drm_dp_update_payload_part1(&mstm->mgr, 1);
/* Disable payloads first */
drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
struct nv50_msto *msto = nv50_msto(encoder);
struct nv50_mstc *mstc = msto->mstc;
if (mstc && mstc->mstm == mstm)
nv50_msto_prepare(msto);
if (mstc && mstc->mstm == mstm && msto->disabled)
nv50_msto_prepare(state, mst_state, &mstm->mgr, msto);
}
}
/* Add payloads for new heads, while also updating the start slots of any unmodified (but
* active) heads that may have had their VC slots shifted left after the previous step
*/
drm_for_each_encoder(encoder, mstm->outp->base.base.dev) {
if (encoder->encoder_type == DRM_MODE_ENCODER_DPMST) {
struct nv50_msto *msto = nv50_msto(encoder);
struct nv50_mstc *mstc = msto->mstc;
if (mstc && mstc->mstm == mstm && !msto->disabled)
nv50_msto_prepare(state, mst_state, &mstm->mgr, msto);
}
}
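The two-pass loop above removes or shrinks payloads before adding new ones, so that the start slots of still-active heads can shift left once earlier payloads are freed. A hedged, hypothetical mini-model of that ordering (plain userspace C, not the driver's actual data structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical mini-model of a head's virtual-channel payload. */
struct head { bool disabled; int start_slot, slots; };

/*
 * Pass 1 frees the slots of disabled heads; pass 2 then packs the
 * remaining active heads left-to-right, mirroring how VC start slots
 * shift left after earlier payloads are removed.
 */
static void update_payloads(struct head *heads, size_t n)
{
	int next = 0;
	size_t i;

	for (i = 0; i < n; i++)		/* pass 1: drop disabled heads */
		if (heads[i].disabled)
			heads[i].slots = 0;
	for (i = 0; i < n; i++) {	/* pass 2: repack active heads */
		if (!heads[i].slots)
			continue;
		heads[i].start_slot = next;
		next += heads[i].slots;
	}
}
```

Running both passes in one loop would risk programming a new payload into slots an about-to-be-removed payload still occupies, which is why the real code walks the encoder list twice.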
@@ -1614,9 +1596,7 @@ nv50_mstm_new(struct nouveau_encoder *outp, struct drm_dp_aux *aux, int aux_max,
mstm->mgr.cbs = &nv50_mstm;
ret = drm_dp_mst_topology_mgr_init(&mstm->mgr, dev, aux, aux_max,
max_payloads, outp->dcb->dpconf.link_nr,
drm_dp_bw_code_to_link_rate(outp->dcb->dpconf.link_bw),
conn_base_id);
max_payloads, conn_base_id);
if (ret)
return ret;
@@ -1834,7 +1814,7 @@ nv50_sor_func = {
.destroy = nv50_sor_destroy,
};
static bool nv50_has_mst(struct nouveau_drm *drm)
bool nv50_has_mst(struct nouveau_drm *drm)
{
struct nvkm_bios *bios = nvxx_bios(&drm->client.device);
u32 data;
@@ -2068,20 +2048,20 @@ nv50_pior_create(struct drm_connector *connector, struct dcb_output *dcbe)
static void
nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock)
{
struct drm_dp_mst_topology_mgr *mgr;
struct drm_dp_mst_topology_state *mst_state;
struct nouveau_drm *drm = nouveau_drm(state->dev);
struct nv50_disp *disp = nv50_disp(drm->dev);
struct nv50_core *core = disp->core;
struct nv50_mstm *mstm;
struct drm_encoder *encoder;
int i;
NV_ATOMIC(drm, "commit core %08x\n", interlock[NV50_DISP_INTERLOCK_BASE]);
drm_for_each_encoder(encoder, drm->dev) {
if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) {
mstm = nouveau_encoder(encoder)->dp.mstm;
if (mstm && mstm->modified)
nv50_mstm_prepare(mstm);
}
for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
mstm = nv50_mstm(mgr);
if (mstm->modified)
nv50_mstm_prepare(state, mst_state, mstm);
}
core->func->ntfy_init(disp->sync, NV50_DISP_CORE_NTFY);
@@ -2090,12 +2070,10 @@ nv50_disp_atomic_commit_core(struct drm_atomic_state *state, u32 *interlock)
disp->core->chan.base.device))
NV_ERROR(drm, "core notifier timeout\n");
drm_for_each_encoder(encoder, drm->dev) {
if (encoder->encoder_type != DRM_MODE_ENCODER_DPMST) {
mstm = nouveau_encoder(encoder)->dp.mstm;
if (mstm && mstm->modified)
nv50_mstm_cleanup(mstm);
}
for_each_new_mst_mgr_in_state(state, mgr, mst_state, i) {
mstm = nv50_mstm(mgr);
if (mstm->modified)
nv50_mstm_cleanup(state, mst_state, mstm);
}
}
@@ -2136,6 +2114,7 @@ nv50_disp_atomic_commit_tail(struct drm_atomic_state *state)
nv50_crc_atomic_stop_reporting(state);
drm_atomic_helper_wait_for_fences(dev, state, false);
drm_atomic_helper_wait_for_dependencies(state);
drm_dp_mst_atomic_wait_for_dependencies(state);
drm_atomic_helper_update_legacy_modeset_state(dev, state);
drm_atomic_helper_calc_timestamping_constants(state);
@@ -2616,6 +2595,11 @@ nv50_disp_func = {
.atomic_state_free = nv50_disp_atomic_state_free,
};
static const struct drm_mode_config_helper_funcs
nv50_disp_helper_func = {
.atomic_commit_setup = drm_dp_mst_atomic_setup_commit,
};
/******************************************************************************
* Init
*****************************************************************************/
@@ -2699,6 +2683,7 @@ nv50_display_create(struct drm_device *dev)
nouveau_display(dev)->fini = nv50_display_fini;
disp->disp = &nouveau_display(dev)->disp;
dev->mode_config.funcs = &nv50_disp_func;
dev->mode_config.helper_private = &nv50_disp_helper_func;
dev->mode_config.quirk_addfb_prefer_xbgr_30bpp = true;
dev->mode_config.normalize_zpos = true;


@@ -106,6 +106,8 @@ void nv50_dmac_destroy(struct nv50_dmac *);
*/
struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder);
bool nv50_has_mst(struct nouveau_drm *drm);
u32 *evo_wait(struct nv50_dmac *, int nr);
void evo_kick(u32 *, struct nv50_dmac *);


@@ -1106,11 +1106,25 @@ nouveau_connector_best_encoder(struct drm_connector *connector)
return NULL;
}
static int
nouveau_connector_atomic_check(struct drm_connector *connector, struct drm_atomic_state *state)
{
struct nouveau_connector *nv_conn = nouveau_connector(connector);
struct drm_connector_state *conn_state =
drm_atomic_get_new_connector_state(state, connector);
if (!nv_conn->dp_encoder || !nv50_has_mst(nouveau_drm(connector->dev)))
return 0;
return drm_dp_mst_root_conn_atomic_check(conn_state, &nv_conn->dp_encoder->dp.mstm->mgr);
}
static const struct drm_connector_helper_funcs
nouveau_connector_helper_funcs = {
.get_modes = nouveau_connector_get_modes,
.mode_valid = nouveau_connector_mode_valid,
.best_encoder = nouveau_connector_best_encoder,
.atomic_check = nouveau_connector_atomic_check,
};
static const struct drm_connector_funcs
@@ -1368,7 +1382,7 @@ nouveau_connector_create(struct drm_device *dev,
return ERR_PTR(-ENOMEM);
}
drm_dp_aux_init(&nv_connector->aux);
fallthrough;
break;
default:
funcs = &nouveau_connector_funcs;
break;
@@ -1431,6 +1445,8 @@ nouveau_connector_create(struct drm_device *dev,
switch (type) {
case DRM_MODE_CONNECTOR_DisplayPort:
nv_connector->dp_encoder = find_encoder(&nv_connector->base, DCB_OUTPUT_DP);
fallthrough;
case DRM_MODE_CONNECTOR_eDP:
drm_dp_cec_register_connector(&nv_connector->aux, connector);
break;


@@ -128,6 +128,9 @@ struct nouveau_connector {
struct drm_dp_aux aux;
/* The fixed DP encoder for this connector, if there is one */
struct nouveau_encoder *dp_encoder;
int dithering_mode;
int scaling_mode;


@@ -211,75 +211,24 @@ static const struct attribute_group temp1_auto_point_sensor_group = {
#define N_ATTR_GROUPS 3
static const u32 nouveau_config_chip[] = {
HWMON_C_UPDATE_INTERVAL,
0
};
static const u32 nouveau_config_in[] = {
HWMON_I_INPUT | HWMON_I_MIN | HWMON_I_MAX | HWMON_I_LABEL,
0
};
static const u32 nouveau_config_temp[] = {
HWMON_T_INPUT | HWMON_T_MAX | HWMON_T_MAX_HYST |
HWMON_T_CRIT | HWMON_T_CRIT_HYST | HWMON_T_EMERGENCY |
HWMON_T_EMERGENCY_HYST,
0
};
static const u32 nouveau_config_fan[] = {
HWMON_F_INPUT,
0
};
static const u32 nouveau_config_pwm[] = {
HWMON_PWM_INPUT | HWMON_PWM_ENABLE,
0
};
static const u32 nouveau_config_power[] = {
HWMON_P_INPUT | HWMON_P_CAP_MAX | HWMON_P_CRIT,
0
};
static const struct hwmon_channel_info nouveau_chip = {
.type = hwmon_chip,
.config = nouveau_config_chip,
};
static const struct hwmon_channel_info nouveau_temp = {
.type = hwmon_temp,
.config = nouveau_config_temp,
};
static const struct hwmon_channel_info nouveau_fan = {
.type = hwmon_fan,
.config = nouveau_config_fan,
};
static const struct hwmon_channel_info nouveau_in = {
.type = hwmon_in,
.config = nouveau_config_in,
};
static const struct hwmon_channel_info nouveau_pwm = {
.type = hwmon_pwm,
.config = nouveau_config_pwm,
};
static const struct hwmon_channel_info nouveau_power = {
.type = hwmon_power,
.config = nouveau_config_power,
};
static const struct hwmon_channel_info *nouveau_info[] = {
&nouveau_chip,
&nouveau_temp,
&nouveau_fan,
&nouveau_in,
&nouveau_pwm,
&nouveau_power,
HWMON_CHANNEL_INFO(chip,
HWMON_C_UPDATE_INTERVAL),
HWMON_CHANNEL_INFO(temp,
HWMON_T_INPUT |
HWMON_T_MAX | HWMON_T_MAX_HYST |
HWMON_T_CRIT | HWMON_T_CRIT_HYST |
HWMON_T_EMERGENCY | HWMON_T_EMERGENCY_HYST),
HWMON_CHANNEL_INFO(fan,
HWMON_F_INPUT),
HWMON_CHANNEL_INFO(in,
HWMON_I_INPUT |
HWMON_I_MIN | HWMON_I_MAX |
HWMON_I_LABEL),
HWMON_CHANNEL_INFO(pwm,
HWMON_PWM_INPUT | HWMON_PWM_ENABLE),
HWMON_CHANNEL_INFO(power,
HWMON_P_INPUT | HWMON_P_CAP_MAX | HWMON_P_CRIT),
NULL
};


@@ -187,3 +187,32 @@ nouveau_mem_new(struct nouveau_cli *cli, u8 kind, u8 comp,
*res = &mem->base;
return 0;
}
bool
nouveau_mem_intersects(struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
u32 num_pages = PFN_UP(size);
/* Don't evict BOs outside of the requested placement range */
if (place->fpfn >= (res->start + num_pages) ||
(place->lpfn && place->lpfn <= res->start))
return false;
return true;
}
bool
nouveau_mem_compatible(struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
u32 num_pages = PFN_UP(size);
if (res->start < place->fpfn ||
(place->lpfn && (res->start + num_pages) > place->lpfn))
return false;
return true;
}
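The new intersects/compatible helpers reduce to simple page-range checks against the requested placement. A hedged standalone sketch (userspace C with simplified stand-ins for `struct ttm_resource` and `struct ttm_place`, and `PFN_UP` open-coded; field subset assumed for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PFN_UP(x) (((x) + PAGE_SIZE - 1) / PAGE_SIZE)

/* Simplified stand-ins for the TTM structures. */
struct res { uint64_t start; };		/* first page of the resource */
struct place { uint32_t fpfn, lpfn; };	/* allowed page range; lpfn == 0 means no upper bound */

/* Does the resource overlap the requested placement range at all? */
static bool mem_intersects(const struct res *res, const struct place *place, size_t size)
{
	uint64_t num_pages = PFN_UP(size);

	if (place->fpfn >= res->start + num_pages ||
	    (place->lpfn && place->lpfn <= res->start))
		return false;
	return true;
}

/* Does the resource sit entirely inside the requested placement range? */
static bool mem_compatible(const struct res *res, const struct place *place, size_t size)
{
	uint64_t num_pages = PFN_UP(size);

	if (res->start < place->fpfn ||
	    (place->lpfn && res->start + num_pages > place->lpfn))
		return false;
	return true;
}
```

For example, a resource covering pages [10, 15) intersects a placement that starts at page 12 but is not compatible with it, which is the distinction the reworked TTM placement code uses to decide between leaving a buffer in place and evicting it.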


@@ -25,6 +25,12 @@ int nouveau_mem_new(struct nouveau_cli *, u8 kind, u8 comp,
struct ttm_resource **);
void nouveau_mem_del(struct ttm_resource_manager *man,
struct ttm_resource *);
bool nouveau_mem_intersects(struct ttm_resource *res,
const struct ttm_place *place,
size_t size);
bool nouveau_mem_compatible(struct ttm_resource *res,
const struct ttm_place *place,
size_t size);
int nouveau_mem_vram(struct ttm_resource *, bool contig, u8 page);
int nouveau_mem_host(struct ttm_resource *, struct ttm_tt *);
void nouveau_mem_fini(struct nouveau_mem *);


@@ -42,6 +42,24 @@ nouveau_manager_del(struct ttm_resource_manager *man,
nouveau_mem_del(man, reg);
}
static bool
nouveau_manager_intersects(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
return nouveau_mem_intersects(res, place, size);
}
static bool
nouveau_manager_compatible(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
return nouveau_mem_compatible(res, place, size);
}
static int
nouveau_vram_manager_new(struct ttm_resource_manager *man,
struct ttm_buffer_object *bo,
@@ -73,6 +91,8 @@ nouveau_vram_manager_new(struct ttm_resource_manager *man,
const struct ttm_resource_manager_func nouveau_vram_manager = {
.alloc = nouveau_vram_manager_new,
.free = nouveau_manager_del,
.intersects = nouveau_manager_intersects,
.compatible = nouveau_manager_compatible,
};
static int
@@ -97,6 +117,8 @@ nouveau_gart_manager_new(struct ttm_resource_manager *man,
const struct ttm_resource_manager_func nouveau_gart_manager = {
.alloc = nouveau_gart_manager_new,
.free = nouveau_manager_del,
.intersects = nouveau_manager_intersects,
.compatible = nouveau_manager_compatible,
};
static int
@@ -130,6 +152,8 @@ nv04_gart_manager_new(struct ttm_resource_manager *man,
const struct ttm_resource_manager_func nv04_gart_manager = {
.alloc = nv04_gart_manager_new,
.free = nouveau_manager_del,
.intersects = nouveau_manager_intersects,
.compatible = nouveau_manager_compatible,
};
static int


@@ -581,7 +581,7 @@ gm20b_clk_prog(struct nvkm_clk *base)
/*
* Interim step for changing DVFS detection settings: low enough
* frequency to be safe at at DVFS coeff = 0.
* frequency to be safe at DVFS coeff = 0.
*
* 1. If voltage is increasing:
* - safe frequency target matches the lowest - old - frequency


@@ -165,8 +165,8 @@ config DRM_PANEL_ILITEK_IL9322
config DRM_PANEL_ILITEK_ILI9341
tristate "Ilitek ILI9341 240x320 QVGA panels"
depends on OF && SPI
depends on DRM_KMS_HELPER
depends on DRM_GEM_DMA_HELPER
select DRM_KMS_HELPER
select DRM_GEM_DMA_HELPER
depends on BACKLIGHT_CLASS_DEVICE
select DRM_MIPI_DBI
help


@@ -53,7 +53,7 @@ struct panel_delay {
* before the HPD signal is reliable. Ideally this is 0 but some panels,
* board designs, or bad pulldown configs can cause a glitch here.
*
* NOTE: on some old panel data this number appers to be much too big.
* NOTE: on some old panel data this number appears to be much too big.
* Presumably some old panels simply didn't have HPD hooked up and put
* the hpd_absent here because this field predates the
* hpd_absent. While that works, it's non-ideal.
@@ -1877,6 +1877,7 @@ static const struct panel_delay delay_200_500_e200 = {
*/
static const struct edp_panel_entry edp_panels[] = {
EDP_PANEL_ENTRY('A', 'U', 'O', 0x1062, &delay_200_500_e50, "B120XAN01.0"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x1e9b, &delay_200_500_e50, "B133UAN02.1"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x405c, &auo_b116xak01.delay, "B116XAK01"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),
@@ -1888,8 +1889,10 @@ static const struct edp_panel_entry edp_panels[] = {
EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"),
EDP_PANEL_ENTRY('C', 'M', 'N', 0x114c, &innolux_n116bca_ea1.delay, "N116BCA-EA1"),
EDP_PANEL_ENTRY('C', 'M', 'N', 0x1247, &delay_200_500_e80_d50, "N120ACA-EA1"),
EDP_PANEL_ENTRY('I', 'V', 'O', 0x057d, &delay_200_500_e200, "R140NWF5 RH"),
EDP_PANEL_ENTRY('I', 'V', 'O', 0x854b, &delay_200_500_p2e100, "M133NW4J-R3"),
EDP_PANEL_ENTRY('K', 'D', 'B', 0x0624, &kingdisplay_kd116n21_30nv_a010.delay, "116N21-30NV-A010"),
EDP_PANEL_ENTRY('K', 'D', 'B', 0x1120, &delay_200_500_e80_d50, "116N29-30NK-C007"),


@@ -248,11 +248,15 @@ void panfrost_mmu_reset(struct panfrost_device *pfdev)
mmu_write(pfdev, MMU_INT_MASK, ~0);
}
static size_t get_pgsize(u64 addr, size_t size)
static size_t get_pgsize(u64 addr, size_t size, size_t *count)
{
if (addr & (SZ_2M - 1) || size < SZ_2M)
return SZ_4K;
size_t blk_offset = -addr % SZ_2M;
if (blk_offset || size < SZ_2M) {
*count = min_not_zero(blk_offset, size) / SZ_4K;
return SZ_4K;
}
*count = size / SZ_2M;
return SZ_2M;
}
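The reworked get_pgsize() can be exercised in isolation. A hedged userspace sketch (the `SZ_*` constants and `min_not_zero()` are open-coded here to mirror the kernel helpers; this is not the driver's actual build environment):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SZ_4K 0x1000u
#define SZ_2M 0x200000u

/* min() of two values, ignoring zeroes (mirrors the kernel helper). */
static size_t min_not_zero(size_t a, size_t b)
{
	if (!a) return b;
	if (!b) return a;
	return a < b ? a : b;
}

/*
 * Pick the largest page size usable at 'addr', and report how many pages
 * of that size fit before either 'size' runs out or the next 2 MiB
 * boundary is reached.
 */
static size_t get_pgsize(uint64_t addr, size_t size, size_t *count)
{
	/* Distance from 'addr' up to the next 2 MiB boundary (0 if aligned). */
	size_t blk_offset = -addr % SZ_2M;

	if (blk_offset || size < SZ_2M) {
		*count = min_not_zero(blk_offset, size) / SZ_4K;
		return SZ_4K;
	}
	*count = size / SZ_2M;
	return SZ_2M;
}
```

Compared with the old version, which returned one page at a time, this lets mmu_map_sg() hand a whole run of same-sized pages to map_pages()/unmap_pages() in a single call.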
@@ -287,12 +291,16 @@ static int mmu_map_sg(struct panfrost_device *pfdev, struct panfrost_mmu *mmu,
dev_dbg(pfdev->dev, "map: as=%d, iova=%llx, paddr=%lx, len=%zx", mmu->as, iova, paddr, len);
while (len) {
size_t pgsize = get_pgsize(iova | paddr, len);
size_t pgcount, mapped = 0;
size_t pgsize = get_pgsize(iova | paddr, len, &pgcount);
ops->map(ops, iova, paddr, pgsize, prot, GFP_KERNEL);
iova += pgsize;
paddr += pgsize;
len -= pgsize;
ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot,
GFP_KERNEL, &mapped);
/* Don't get stuck if things have gone wrong */
mapped = max(mapped, pgsize);
iova += mapped;
paddr += mapped;
len -= mapped;
}
}
@@ -344,15 +352,17 @@ void panfrost_mmu_unmap(struct panfrost_gem_mapping *mapping)
mapping->mmu->as, iova, len);
while (unmapped_len < len) {
size_t unmapped_page;
size_t pgsize = get_pgsize(iova, len - unmapped_len);
size_t unmapped_page, pgcount;
size_t pgsize = get_pgsize(iova, len - unmapped_len, &pgcount);
if (ops->iova_to_phys(ops, iova)) {
unmapped_page = ops->unmap(ops, iova, pgsize, NULL);
WARN_ON(unmapped_page != pgsize);
if (bo->is_heap)
pgcount = 1;
if (!bo->is_heap || ops->iova_to_phys(ops, iova)) {
unmapped_page = ops->unmap_pages(ops, iova, pgsize, pgcount, NULL);
WARN_ON(unmapped_page != pgsize * pgcount);
}
iova += pgsize;
unmapped_len += pgsize;
iova += pgsize * pgcount;
unmapped_len += pgsize * pgcount;
}
panfrost_mmu_flush_range(pfdev, mapping->mmu,


@@ -194,7 +194,6 @@ static int qxl_drm_resume(struct drm_device *dev, bool thaw)
qdev->ram_header->int_mask = QXL_INTERRUPT_MASK;
if (!thaw) {
qxl_reinit_memslots(qdev);
qxl_ring_init_hdr(qdev->release_ring);
}
qxl_create_monitors_object(qdev);
@@ -220,6 +219,7 @@ static int qxl_pm_resume(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
struct drm_device *drm_dev = pci_get_drvdata(pdev);
struct qxl_device *qdev = to_qxl(drm_dev);
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
@@ -227,6 +227,7 @@ static int qxl_pm_resume(struct device *dev)
return -EIO;
}
qxl_io_reset(qdev);
return qxl_drm_resume(drm_dev, false);
}


@@ -49,7 +49,7 @@ radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \
trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
ci_dpm.o dce6_afmt.o radeon_vm.o radeon_ucode.o radeon_ib.o \
radeon_sync.o radeon_audio.o radeon_dp_auxch.o radeon_dp_mst.o
radeon_sync.o radeon_audio.o radeon_dp_auxch.o
radeon-$(CONFIG_MMU_NOTIFIER) += radeon_mn.o


@@ -617,13 +617,6 @@ static u32 atombios_adjust_pll(struct drm_crtc *crtc,
}
}
if (radeon_encoder->is_mst_encoder) {
struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
dp_clock = dig_connector->dp_clock;
}
/* use recommended ref_div for ss */
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (radeon_crtc->ss_enabled) {
@@ -972,9 +965,7 @@ static bool atombios_crtc_prepare_pll(struct drm_crtc *crtc, struct drm_display_
radeon_crtc->bpc = 8;
radeon_crtc->ss_enabled = false;
if (radeon_encoder->is_mst_encoder) {
radeon_dp_mst_prepare_pll(crtc, mode);
} else if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
if ((radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT | ATOM_DEVICE_DFP_SUPPORT)) ||
(radeon_encoder_get_dp_bridge_encoder_id(radeon_crtc->encoder) != ENCODER_OBJECT_ID_NONE)) {
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
struct drm_connector *connector =


@@ -667,15 +667,7 @@ atombios_get_encoder_mode(struct drm_encoder *encoder)
struct drm_connector *connector;
struct radeon_connector *radeon_connector;
struct radeon_connector_atom_dig *dig_connector;
struct radeon_encoder_atom_dig *dig_enc;
if (radeon_encoder_is_digital(encoder)) {
dig_enc = radeon_encoder->enc_priv;
if (dig_enc->active_mst_links)
return ATOM_ENCODER_MODE_DP_MST;
}
if (radeon_encoder->is_mst_encoder || radeon_encoder->offset)
return ATOM_ENCODER_MODE_DP_MST;
/* dp bridges are always DP */
if (radeon_encoder_get_dp_bridge_encoder_id(encoder) != ENCODER_OBJECT_ID_NONE)
return ATOM_ENCODER_MODE_DP;
@@ -1723,10 +1715,6 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode)
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
/* don't power off encoders with active MST links */
if (dig->active_mst_links)
return;
if (ASIC_IS_DCE4(rdev)) {
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector)
atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0);
@@ -1992,53 +1980,6 @@ atombios_set_encoder_crtc_source(struct drm_encoder *encoder)
radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
}
void
atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder, int fe)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
int index = GetIndexIntoMasterTable(COMMAND, SelectCRTC_Source);
uint8_t frev, crev;
union crtc_source_param args;
memset(&args, 0, sizeof(args));
if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
return;
if (frev != 1 && crev != 2)
DRM_ERROR("Unknown table for MST %d, %d\n", frev, crev);
args.v2.ucCRTC = radeon_crtc->crtc_id;
args.v2.ucEncodeMode = ATOM_ENCODER_MODE_DP_MST;
switch (fe) {
case 0:
args.v2.ucEncoderID = ASIC_INT_DIG1_ENCODER_ID;
break;
case 1:
args.v2.ucEncoderID = ASIC_INT_DIG2_ENCODER_ID;
break;
case 2:
args.v2.ucEncoderID = ASIC_INT_DIG3_ENCODER_ID;
break;
case 3:
args.v2.ucEncoderID = ASIC_INT_DIG4_ENCODER_ID;
break;
case 4:
args.v2.ucEncoderID = ASIC_INT_DIG5_ENCODER_ID;
break;
case 5:
args.v2.ucEncoderID = ASIC_INT_DIG6_ENCODER_ID;
break;
case 6:
args.v2.ucEncoderID = ASIC_INT_DIG7_ENCODER_ID;
break;
}
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
}
static void
atombios_apply_encoder_quirks(struct drm_encoder *encoder,
struct drm_display_mode *mode)


@@ -826,8 +826,6 @@ bool radeon_get_atom_connector_info_from_object_table(struct drm_device *dev)
}
radeon_link_encoder_connector(dev);
radeon_setup_mst_connector(dev);
return true;
}


@@ -37,33 +37,12 @@
#include <linux/pm_runtime.h>
#include <linux/vga_switcheroo.h>
static int radeon_dp_handle_hpd(struct drm_connector *connector)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
int ret;
ret = radeon_dp_mst_check_status(radeon_connector);
if (ret == -EINVAL)
return 1;
return 0;
}
void radeon_connector_hotplug(struct drm_connector *connector)
{
struct drm_device *dev = connector->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
struct radeon_connector_atom_dig *dig_connector =
radeon_connector->con_priv;
if (radeon_connector->is_mst_connector)
return;
if (dig_connector->is_mst) {
radeon_dp_handle_hpd(connector);
return;
}
}
/* bail if the connector does not have hpd pin, e.g.,
* VGA, TV, etc.
*/
@@ -1664,9 +1643,6 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
struct drm_encoder *encoder = radeon_best_single_encoder(connector);
int r;
if (radeon_dig_connector->is_mst)
return connector_status_disconnected;
if (!drm_kms_helper_is_poll_worker()) {
r = pm_runtime_get_sync(connector->dev->dev);
if (r < 0) {
@@ -1729,21 +1705,12 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
radeon_dig_connector->dp_sink_type = radeon_dp_getsinktype(radeon_connector);
if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
ret = connector_status_connected;
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
radeon_dp_getdpcd(radeon_connector);
r = radeon_dp_mst_probe(radeon_connector);
if (r == 1)
ret = connector_status_disconnected;
}
} else {
if (radeon_dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
if (radeon_dp_getdpcd(radeon_connector)) {
r = radeon_dp_mst_probe(radeon_connector);
if (r == 1)
ret = connector_status_disconnected;
else
ret = connector_status_connected;
}
if (radeon_dp_getdpcd(radeon_connector))
ret = connector_status_connected;
} else {
/* try non-aux ddc (DP to DVI/HDMI/etc. adapter) */
if (radeon_ddc_probe(radeon_connector, false))
@@ -2561,25 +2528,3 @@ radeon_add_legacy_connector(struct drm_device *dev,
connector->display_info.subpixel_order = subpixel_order;
drm_connector_register(connector);
}
void radeon_setup_mst_connector(struct drm_device *dev)
{
struct radeon_device *rdev = dev->dev_private;
struct drm_connector *connector;
struct radeon_connector *radeon_connector;
if (!ASIC_IS_DCE5(rdev))
return;
if (radeon_mst == 0)
return;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
radeon_connector = to_radeon_connector(connector);
if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
continue;
radeon_dp_mst_init(radeon_connector);
}
}


@@ -1438,7 +1438,6 @@ int radeon_device_init(struct radeon_device *rdev,
goto failed;
radeon_gem_debugfs_init(rdev);
radeon_mst_debugfs_init(rdev);
if (rdev->flags & RADEON_IS_AGP && !rdev->accel_working) {
/* Acceleration not working on AGP card try again


@@ -1,778 +0,0 @@
// SPDX-License-Identifier: MIT
#include <drm/display/drm_dp_mst_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_file.h>
#include <drm/drm_probe_helper.h>
#include "atom.h"
#include "ni_reg.h"
#include "radeon.h"
static struct radeon_encoder *radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector);
static int radeon_atom_set_enc_offset(int id)
{
static const int offsets[] = { EVERGREEN_CRTC0_REGISTER_OFFSET,
EVERGREEN_CRTC1_REGISTER_OFFSET,
EVERGREEN_CRTC2_REGISTER_OFFSET,
EVERGREEN_CRTC3_REGISTER_OFFSET,
EVERGREEN_CRTC4_REGISTER_OFFSET,
EVERGREEN_CRTC5_REGISTER_OFFSET,
0x13830 - 0x7030 };
return offsets[id];
}
static int radeon_dp_mst_set_be_cntl(struct radeon_encoder *primary,
struct radeon_encoder_mst *mst_enc,
enum radeon_hpd_id hpd, bool enable)
{
struct drm_device *dev = primary->base.dev;
struct radeon_device *rdev = dev->dev_private;
uint32_t reg;
int retries = 0;
uint32_t temp;
reg = RREG32(NI_DIG_BE_CNTL + primary->offset);
/* set MST mode */
reg &= ~NI_DIG_FE_DIG_MODE(7);
reg |= NI_DIG_FE_DIG_MODE(NI_DIG_MODE_DP_MST);
if (enable)
reg |= NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
else
reg &= ~NI_DIG_FE_SOURCE_SELECT(1 << mst_enc->fe);
reg |= NI_DIG_HPD_SELECT(hpd);
DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DIG_BE_CNTL + primary->offset, reg);
WREG32(NI_DIG_BE_CNTL + primary->offset, reg);
if (enable) {
uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
do {
temp = RREG32(NI_DIG_FE_CNTL + offset);
} while ((temp & NI_DIG_SYMCLK_FE_ON) && retries++ < 10000);
if (retries == 10000)
DRM_ERROR("timed out waiting for FE %d %d\n", primary->offset, mst_enc->fe);
}
return 0;
}
static int radeon_dp_mst_set_stream_attrib(struct radeon_encoder *primary,
int stream_number,
int fe,
int slots)
{
struct drm_device *dev = primary->base.dev;
struct radeon_device *rdev = dev->dev_private;
u32 temp, val;
int retries = 0;
int satreg, satidx;
satreg = stream_number >> 1;
satidx = stream_number & 1;
temp = RREG32(NI_DP_MSE_SAT0 + satreg + primary->offset);
val = NI_DP_MSE_SAT_SLOT_COUNT0(slots) | NI_DP_MSE_SAT_SRC0(fe);
val <<= (16 * satidx);
temp &= ~(0xffff << (16 * satidx));
temp |= val;
DRM_DEBUG_KMS("writing 0x%08x 0x%08x\n", NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
WREG32(NI_DP_MSE_SAT0 + satreg + primary->offset, temp);
WREG32(NI_DP_MSE_SAT_UPDATE + primary->offset, 1);
do {
unsigned value1, value2;
udelay(10);
temp = RREG32(NI_DP_MSE_SAT_UPDATE + primary->offset);
value1 = temp & NI_DP_MSE_SAT_UPDATE_MASK;
value2 = temp & NI_DP_MSE_16_MTP_KEEPOUT;
if (!value1 && !value2)
break;
} while (retries++ < 50);
if (retries == 10000)
DRM_ERROR("timed out waitin for SAT update %d\n", primary->offset);
/* MTP 16 ? */
return 0;
}
static int radeon_dp_mst_update_stream_attribs(struct radeon_connector *mst_conn,
struct radeon_encoder *primary)
{
struct drm_device *dev = mst_conn->base.dev;
struct stream_attribs new_attribs[6];
int i;
int idx = 0;
struct radeon_connector *radeon_connector;
struct drm_connector *connector;
memset(new_attribs, 0, sizeof(new_attribs));
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
struct radeon_encoder *subenc;
struct radeon_encoder_mst *mst_enc;
radeon_connector = to_radeon_connector(connector);
if (!radeon_connector->is_mst_connector)
continue;
if (radeon_connector->mst_port != mst_conn)
continue;
subenc = radeon_connector->mst_encoder;
mst_enc = subenc->enc_priv;
if (!mst_enc->enc_active)
continue;
new_attribs[idx].fe = mst_enc->fe;
new_attribs[idx].slots = drm_dp_mst_get_vcpi_slots(&mst_conn->mst_mgr, mst_enc->port);
idx++;
}
for (i = 0; i < idx; i++) {
if (new_attribs[i].fe != mst_conn->cur_stream_attribs[i].fe ||
new_attribs[i].slots != mst_conn->cur_stream_attribs[i].slots) {
radeon_dp_mst_set_stream_attrib(primary, i, new_attribs[i].fe, new_attribs[i].slots);
mst_conn->cur_stream_attribs[i].fe = new_attribs[i].fe;
mst_conn->cur_stream_attribs[i].slots = new_attribs[i].slots;
}
}
for (i = idx; i < mst_conn->enabled_attribs; i++) {
radeon_dp_mst_set_stream_attrib(primary, i, 0, 0);
mst_conn->cur_stream_attribs[i].fe = 0;
mst_conn->cur_stream_attribs[i].slots = 0;
}
mst_conn->enabled_attribs = idx;
return 0;
}
static int radeon_dp_mst_set_vcp_size(struct radeon_encoder *mst, s64 avg_time_slots_per_mtp)
{
struct drm_device *dev = mst->base.dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder_mst *mst_enc = mst->enc_priv;
uint32_t val, temp;
uint32_t offset = radeon_atom_set_enc_offset(mst_enc->fe);
int retries = 0;
uint32_t x = drm_fixp2int(avg_time_slots_per_mtp);
uint32_t y = drm_fixp2int_ceil((avg_time_slots_per_mtp - x) << 26);
val = NI_DP_MSE_RATE_X(x) | NI_DP_MSE_RATE_Y(y);
WREG32(NI_DP_MSE_RATE_CNTL + offset, val);
do {
temp = RREG32(NI_DP_MSE_RATE_UPDATE + offset);
udelay(10);
} while ((temp & 0x1) && (retries++ < 10000));
if (retries >= 10000)
DRM_ERROR("timed out wait for rate cntl %d\n", mst_enc->fe);
return 0;
}
static int radeon_dp_mst_get_ddc_modes(struct drm_connector *connector)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_connector *master = radeon_connector->mst_port;
struct edid *edid;
int ret = 0;
edid = drm_dp_mst_get_edid(connector, &master->mst_mgr, radeon_connector->port);
radeon_connector->edid = edid;
DRM_DEBUG_KMS("edid retrieved %p\n", edid);
if (radeon_connector->edid) {
drm_connector_update_edid_property(&radeon_connector->base, radeon_connector->edid);
ret = drm_add_edid_modes(&radeon_connector->base, radeon_connector->edid);
return ret;
}
drm_connector_update_edid_property(&radeon_connector->base, NULL);
return ret;
}
static int radeon_dp_mst_get_modes(struct drm_connector *connector)
{
return radeon_dp_mst_get_ddc_modes(connector);
}
static enum drm_mode_status
radeon_dp_mst_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
/* TODO - validate mode against available PBN for link */
if (mode->clock < 10000)
return MODE_CLOCK_LOW;
if (mode->flags & DRM_MODE_FLAG_DBLCLK)
return MODE_H_ILLEGAL;
return MODE_OK;
}
static struct
drm_encoder *radeon_mst_best_encoder(struct drm_connector *connector)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
return &radeon_connector->mst_encoder->base;
}
static int
radeon_dp_mst_detect(struct drm_connector *connector,
struct drm_modeset_acquire_ctx *ctx,
bool force)
{
struct radeon_connector *radeon_connector =
to_radeon_connector(connector);
struct radeon_connector *master = radeon_connector->mst_port;
if (drm_connector_is_unregistered(connector))
return connector_status_disconnected;
return drm_dp_mst_detect_port(connector, ctx, &master->mst_mgr,
radeon_connector->port);
}
static const struct drm_connector_helper_funcs radeon_dp_mst_connector_helper_funcs = {
.get_modes = radeon_dp_mst_get_modes,
.mode_valid = radeon_dp_mst_mode_valid,
.best_encoder = radeon_mst_best_encoder,
.detect_ctx = radeon_dp_mst_detect,
};
static void
radeon_dp_mst_connector_destroy(struct drm_connector *connector)
{
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
struct radeon_encoder *radeon_encoder = radeon_connector->mst_encoder;
drm_encoder_cleanup(&radeon_encoder->base);
kfree(radeon_encoder);
drm_connector_cleanup(connector);
kfree(radeon_connector);
}
static const struct drm_connector_funcs radeon_dp_mst_connector_funcs = {
.dpms = drm_helper_connector_dpms,
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = radeon_dp_mst_connector_destroy,
};
static struct drm_connector *radeon_dp_add_mst_connector(struct drm_dp_mst_topology_mgr *mgr,
struct drm_dp_mst_port *port,
const char *pathprop)
{
struct radeon_connector *master = container_of(mgr, struct radeon_connector, mst_mgr);
struct drm_device *dev = master->base.dev;
struct radeon_connector *radeon_connector;
struct drm_connector *connector;
radeon_connector = kzalloc(sizeof(*radeon_connector), GFP_KERNEL);
if (!radeon_connector)
return NULL;
radeon_connector->is_mst_connector = true;
connector = &radeon_connector->base;
radeon_connector->port = port;
radeon_connector->mst_port = master;
DRM_DEBUG_KMS("\n");
drm_connector_init(dev, connector, &radeon_dp_mst_connector_funcs, DRM_MODE_CONNECTOR_DisplayPort);
drm_connector_helper_add(connector, &radeon_dp_mst_connector_helper_funcs);
radeon_connector->mst_encoder = radeon_dp_create_fake_mst_encoder(master);
drm_object_attach_property(&connector->base, dev->mode_config.path_property, 0);
drm_object_attach_property(&connector->base, dev->mode_config.tile_property, 0);
drm_connector_set_path_property(connector, pathprop);
return connector;
}
static const struct drm_dp_mst_topology_cbs mst_cbs = {
.add_connector = radeon_dp_add_mst_connector,
};
static struct
radeon_connector *radeon_mst_find_connector(struct drm_encoder *encoder)
{
struct drm_device *dev = encoder->dev;
struct drm_connector *connector;
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
struct radeon_connector *radeon_connector = to_radeon_connector(connector);
if (!connector->encoder)
continue;
if (!radeon_connector->is_mst_connector)
continue;
DRM_DEBUG_KMS("checking %p vs %p\n", connector->encoder, encoder);
if (connector->encoder == encoder)
return radeon_connector;
}
return NULL;
}
void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode)
{
struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(radeon_crtc->encoder);
struct radeon_encoder_mst *mst_enc = radeon_encoder->enc_priv;
struct radeon_connector *radeon_connector = radeon_mst_find_connector(&radeon_encoder->base);
int dp_clock;
struct radeon_connector_atom_dig *dig_connector = mst_enc->connector->con_priv;
if (radeon_connector) {
radeon_connector->pixelclock_for_modeset = mode->clock;
if (radeon_connector->base.display_info.bpc)
radeon_crtc->bpc = radeon_connector->base.display_info.bpc;
else
radeon_crtc->bpc = 8;
}
DRM_DEBUG_KMS("dp_clock %p %d\n", dig_connector, dig_connector->dp_clock);
dp_clock = dig_connector->dp_clock;
radeon_crtc->ss_enabled =
radeon_atombios_get_asic_ss_info(rdev, &radeon_crtc->ss,
ASIC_INTERNAL_SS_ON_DP,
dp_clock);
}
static void
radeon_mst_encoder_dpms(struct drm_encoder *encoder, int mode)
{
struct drm_device *dev = encoder->dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder, *primary;
struct radeon_encoder_mst *mst_enc;
struct radeon_encoder_atom_dig *dig_enc;
struct radeon_connector *radeon_connector;
struct drm_crtc *crtc;
struct radeon_crtc *radeon_crtc;
int slots;
s64 fixed_pbn, fixed_pbn_per_slot, avg_time_slots_per_mtp;
if (!ASIC_IS_DCE5(rdev)) {
DRM_ERROR("got mst dpms on non-DCE5\n");
return;
}
radeon_connector = radeon_mst_find_connector(encoder);
if (!radeon_connector)
return;
radeon_encoder = to_radeon_encoder(encoder);
mst_enc = radeon_encoder->enc_priv;
primary = mst_enc->primary;
dig_enc = primary->enc_priv;
crtc = encoder->crtc;
DRM_DEBUG_KMS("got connector %d\n", dig_enc->active_mst_links);
switch (mode) {
case DRM_MODE_DPMS_ON:
dig_enc->active_mst_links++;
radeon_crtc = to_radeon_crtc(crtc);
if (dig_enc->active_mst_links == 1) {
mst_enc->fe = dig_enc->dig_encoder;
mst_enc->fe_from_be = true;
atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
atombios_dig_encoder_setup(&primary->base, ATOM_ENCODER_CMD_SETUP, 0);
atombios_dig_transmitter_setup2(&primary->base, ATOM_TRANSMITTER_ACTION_ENABLE,
0, 0, dig_enc->dig_encoder);
if (radeon_dp_needs_link_train(mst_enc->connector) ||
dig_enc->active_mst_links == 1) {
radeon_dp_link_train(&primary->base, &mst_enc->connector->base);
}
} else {
mst_enc->fe = radeon_atom_pick_dig_encoder(encoder, radeon_crtc->crtc_id);
if (mst_enc->fe == -1)
DRM_ERROR("failed to get frontend for dig encoder\n");
mst_enc->fe_from_be = false;
atombios_set_mst_encoder_crtc_source(encoder, mst_enc->fe);
}
DRM_DEBUG_KMS("dig encoder is %d %d %d\n", dig_enc->dig_encoder,
dig_enc->linkb, radeon_crtc->crtc_id);
slots = drm_dp_find_vcpi_slots(&radeon_connector->mst_port->mst_mgr,
mst_enc->pbn);
drm_dp_mst_allocate_vcpi(&radeon_connector->mst_port->mst_mgr,
radeon_connector->port,
mst_enc->pbn, slots);
drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1);
radeon_dp_mst_set_be_cntl(primary, mst_enc,
radeon_connector->mst_port->hpd.hpd, true);
mst_enc->enc_active = true;
radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
fixed_pbn = drm_int2fixp(mst_enc->pbn);
fixed_pbn_per_slot = drm_int2fixp(radeon_connector->mst_port->mst_mgr.pbn_div);
avg_time_slots_per_mtp = drm_fixp_div(fixed_pbn, fixed_pbn_per_slot);
radeon_dp_mst_set_vcp_size(radeon_encoder, avg_time_slots_per_mtp);
atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0,
mst_enc->fe);
drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
break;
case DRM_MODE_DPMS_STANDBY:
case DRM_MODE_DPMS_SUSPEND:
case DRM_MODE_DPMS_OFF:
DRM_ERROR("DPMS OFF %d\n", dig_enc->active_mst_links);
if (!mst_enc->enc_active)
return;
drm_dp_mst_reset_vcpi_slots(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
drm_dp_update_payload_part1(&radeon_connector->mst_port->mst_mgr, 1);
drm_dp_check_act_status(&radeon_connector->mst_port->mst_mgr);
/* and this can also fail */
drm_dp_update_payload_part2(&radeon_connector->mst_port->mst_mgr);
drm_dp_mst_deallocate_vcpi(&radeon_connector->mst_port->mst_mgr, mst_enc->port);
mst_enc->enc_active = false;
radeon_dp_mst_update_stream_attribs(radeon_connector->mst_port, primary);
radeon_dp_mst_set_be_cntl(primary, mst_enc,
radeon_connector->mst_port->hpd.hpd, false);
atombios_dig_encoder_setup2(&primary->base, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0,
mst_enc->fe);
if (!mst_enc->fe_from_be)
radeon_atom_release_dig_encoder(rdev, mst_enc->fe);
mst_enc->fe_from_be = false;
dig_enc->active_mst_links--;
if (dig_enc->active_mst_links == 0) {
/* drop link */
}
break;
}
}
static bool radeon_mst_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct radeon_encoder_mst *mst_enc;
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_connector_atom_dig *dig_connector;
int bpp = 24;
mst_enc = radeon_encoder->enc_priv;
mst_enc->pbn = drm_dp_calc_pbn_mode(adjusted_mode->clock, bpp, false);
mst_enc->primary->active_device = mst_enc->primary->devices & mst_enc->connector->devices;
DRM_DEBUG_KMS("setting active device to %08x from %08x %08x for encoder %d\n",
mst_enc->primary->active_device, mst_enc->primary->devices,
mst_enc->connector->devices, mst_enc->primary->base.encoder_type);
drm_mode_set_crtcinfo(adjusted_mode, 0);
dig_connector = mst_enc->connector->con_priv;
dig_connector->dp_lane_count = drm_dp_max_lane_count(dig_connector->dpcd);
dig_connector->dp_clock = drm_dp_max_link_rate(dig_connector->dpcd);
DRM_DEBUG_KMS("dig clock %p %d %d\n", dig_connector,
dig_connector->dp_lane_count, dig_connector->dp_clock);
return true;
}
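mode_fixup derives the payload bandwidth number (PBN) from the adjusted pixel clock and a fixed 24 bpp. A rough Python model of drm_dp_calc_pbn_mode() without the DSC path (`dp_calc_pbn_mode` is an illustrative name; the constants come from the PBN definition, not from this driver):

```python
def dp_calc_pbn_mode(clock_khz: int, bpp: int) -> int:
    # The PBN unit is 54/64 MBytes/s; the 1006/1000 factor adds the 0.6%
    # margin required for DP MST payloads (no FEC/DSC handling here).
    num = clock_khz * bpp * 64 * 1006
    den = 8 * 54 * 1000 * 1000
    return -(-num // den)  # ceiling division
```

For example, a 297 MHz clock at 24 bpp works out to a PBN of 1063.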
static void radeon_mst_encoder_prepare(struct drm_encoder *encoder)
{
struct radeon_connector *radeon_connector;
struct radeon_encoder *radeon_encoder, *primary;
struct radeon_encoder_mst *mst_enc;
struct radeon_encoder_atom_dig *dig_enc;
radeon_connector = radeon_mst_find_connector(encoder);
if (!radeon_connector) {
DRM_DEBUG_KMS("failed to find connector %p\n", encoder);
return;
}
radeon_encoder = to_radeon_encoder(encoder);
radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
mst_enc = radeon_encoder->enc_priv;
primary = mst_enc->primary;
dig_enc = primary->enc_priv;
mst_enc->port = radeon_connector->port;
if (dig_enc->dig_encoder == -1) {
dig_enc->dig_encoder = radeon_atom_pick_dig_encoder(&primary->base, -1);
primary->offset = radeon_atom_set_enc_offset(dig_enc->dig_encoder);
atombios_set_mst_encoder_crtc_source(encoder, dig_enc->dig_encoder);
}
DRM_DEBUG_KMS("%d %d\n", dig_enc->dig_encoder, primary->offset);
}
static void
radeon_mst_encoder_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
DRM_DEBUG_KMS("\n");
}
static void radeon_mst_encoder_commit(struct drm_encoder *encoder)
{
radeon_mst_encoder_dpms(encoder, DRM_MODE_DPMS_ON);
DRM_DEBUG_KMS("\n");
}
static const struct drm_encoder_helper_funcs radeon_mst_helper_funcs = {
.dpms = radeon_mst_encoder_dpms,
.mode_fixup = radeon_mst_mode_fixup,
.prepare = radeon_mst_encoder_prepare,
.mode_set = radeon_mst_encoder_mode_set,
.commit = radeon_mst_encoder_commit,
};
static void radeon_dp_mst_encoder_destroy(struct drm_encoder *encoder)
{
drm_encoder_cleanup(encoder);
kfree(encoder);
}
static const struct drm_encoder_funcs radeon_dp_mst_enc_funcs = {
.destroy = radeon_dp_mst_encoder_destroy,
};
static struct radeon_encoder *
radeon_dp_create_fake_mst_encoder(struct radeon_connector *connector)
{
struct drm_device *dev = connector->base.dev;
struct radeon_device *rdev = dev->dev_private;
struct radeon_encoder *radeon_encoder;
struct radeon_encoder_mst *mst_enc;
struct drm_encoder *encoder;
const struct drm_connector_helper_funcs *connector_funcs = connector->base.helper_private;
struct drm_encoder *enc_master = connector_funcs->best_encoder(&connector->base);
DRM_DEBUG_KMS("enc master is %p\n", enc_master);
radeon_encoder = kzalloc(sizeof(*radeon_encoder), GFP_KERNEL);
if (!radeon_encoder)
return NULL;
radeon_encoder->enc_priv = kzalloc(sizeof(*mst_enc), GFP_KERNEL);
if (!radeon_encoder->enc_priv) {
kfree(radeon_encoder);
return NULL;
}
encoder = &radeon_encoder->base;
switch (rdev->num_crtc) {
case 1:
encoder->possible_crtcs = 0x1;
break;
case 2:
default:
encoder->possible_crtcs = 0x3;
break;
case 4:
encoder->possible_crtcs = 0xf;
break;
case 6:
encoder->possible_crtcs = 0x3f;
break;
}
drm_encoder_init(dev, &radeon_encoder->base, &radeon_dp_mst_enc_funcs,
DRM_MODE_ENCODER_DPMST, NULL);
drm_encoder_helper_add(encoder, &radeon_mst_helper_funcs);
mst_enc = radeon_encoder->enc_priv;
mst_enc->connector = connector;
mst_enc->primary = to_radeon_encoder(enc_master);
radeon_encoder->is_mst_encoder = true;
return radeon_encoder;
}
int
radeon_dp_mst_init(struct radeon_connector *radeon_connector)
{
struct drm_device *dev = radeon_connector->base.dev;
int max_link_rate;
if (!radeon_connector->ddc_bus->has_aux)
return 0;
if (radeon_connector_is_dp12_capable(&radeon_connector->base))
max_link_rate = 0x14;
else
max_link_rate = 0x0a;
radeon_connector->mst_mgr.cbs = &mst_cbs;
return drm_dp_mst_topology_mgr_init(&radeon_connector->mst_mgr, dev,
&radeon_connector->ddc_bus->aux, 16, 6,
4, drm_dp_bw_code_to_link_rate(max_link_rate),
radeon_connector->base.base.id);
}
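The init path picks a raw DPCD bandwidth code (0x14 for DP 1.2-capable connectors, 0x0a otherwise) and converts it with drm_dp_bw_code_to_link_rate(). A minimal sketch of that conversion:

```python
def dp_bw_code_to_link_rate(link_bw: int) -> int:
    # The DPCD bandwidth code is the per-lane rate in units of 0.27 Gbit/s;
    # drm_dp_bw_code_to_link_rate() returns the rate in kHz.
    return link_bw * 27000
```

So 0x0a maps to 270000 kHz (HBR) and 0x14 to 540000 kHz (HBR2), the two rates the probe above can hand to the topology manager.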
int
radeon_dp_mst_probe(struct radeon_connector *radeon_connector)
{
struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
struct drm_device *dev = radeon_connector->base.dev;
struct radeon_device *rdev = dev->dev_private;
int ret;
u8 msg[1];
if (!radeon_mst)
return 0;
if (!ASIC_IS_DCE5(rdev))
return 0;
if (dig_connector->dpcd[DP_DPCD_REV] < 0x12)
return 0;
ret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux, DP_MSTM_CAP, msg,
1);
if (ret) {
if (msg[0] & DP_MST_CAP) {
DRM_DEBUG_KMS("Sink is MST capable\n");
dig_connector->is_mst = true;
} else {
DRM_DEBUG_KMS("Sink is not MST capable\n");
dig_connector->is_mst = false;
}
}
drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
dig_connector->is_mst);
return dig_connector->is_mst;
}
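The probe gates MST on two sink-side conditions: a DPCD revision of at least 1.2 and the MST_CAP bit in the MSTM_CAP register. A minimal sketch of that decision (`sink_supports_mst` is an illustrative name; the register offsets are from the DisplayPort spec):

```python
DP_DPCD_REV = 0x000   # DPCD register offsets (DisplayPort spec)
DP_MSTM_CAP = 0x021
DP_MST_CAP  = 1 << 0  # bit 0 of MSTM_CAP

def sink_supports_mst(dpcd_rev: int, mstm_cap: int) -> bool:
    # Mirrors the gating in radeon_dp_mst_probe(): DPCD 1.2+ and the
    # MST_CAP bit set in the sink's MSTM_CAP register.
    return dpcd_rev >= 0x12 and bool(mstm_cap & DP_MST_CAP)
```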
int
radeon_dp_mst_check_status(struct radeon_connector *radeon_connector)
{
struct radeon_connector_atom_dig *dig_connector = radeon_connector->con_priv;
int retry;
if (dig_connector->is_mst) {
u8 esi[16] = { 0 };
int dret;
int ret = 0;
bool handled;
dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
DP_SINK_COUNT_ESI, esi, 8);
go_again:
if (dret == 8) {
DRM_DEBUG_KMS("got esi %3ph\n", esi);
ret = drm_dp_mst_hpd_irq(&radeon_connector->mst_mgr, esi, &handled);
if (handled) {
for (retry = 0; retry < 3; retry++) {
int wret;
wret = drm_dp_dpcd_write(&radeon_connector->ddc_bus->aux,
DP_SINK_COUNT_ESI + 1, &esi[1], 3);
if (wret == 3)
break;
}
dret = drm_dp_dpcd_read(&radeon_connector->ddc_bus->aux,
DP_SINK_COUNT_ESI, esi, 8);
if (dret == 8) {
DRM_DEBUG_KMS("got esi2 %3ph\n", esi);
goto go_again;
}
} else
ret = 0;
return ret;
} else {
DRM_DEBUG_KMS("failed to get ESI - device may have failed %d\n", ret);
dig_connector->is_mst = false;
drm_dp_mst_topology_mgr_set_mst(&radeon_connector->mst_mgr,
dig_connector->is_mst);
/* send a hotplug event */
}
}
return -EINVAL;
}
#if defined(CONFIG_DEBUG_FS)
static int radeon_debugfs_mst_info_show(struct seq_file *m, void *unused)
{
struct radeon_device *rdev = (struct radeon_device *)m->private;
struct drm_device *dev = rdev->ddev;
struct drm_connector *connector;
struct radeon_connector *radeon_connector;
struct radeon_connector_atom_dig *dig_connector;
int i;
drm_modeset_lock_all(dev);
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
if (connector->connector_type != DRM_MODE_CONNECTOR_DisplayPort)
continue;
radeon_connector = to_radeon_connector(connector);
dig_connector = radeon_connector->con_priv;
if (radeon_connector->is_mst_connector)
continue;
if (!dig_connector->is_mst)
continue;
drm_dp_mst_dump_topology(m, &radeon_connector->mst_mgr);
for (i = 0; i < radeon_connector->enabled_attribs; i++)
seq_printf(m, "attrib %d: %d %d\n", i,
radeon_connector->cur_stream_attribs[i].fe,
radeon_connector->cur_stream_attribs[i].slots);
}
drm_modeset_unlock_all(dev);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(radeon_debugfs_mst_info);
#endif
void radeon_mst_debugfs_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
struct dentry *root = rdev->ddev->primary->debugfs_root;
debugfs_create_file("radeon_mst_info", 0444, root, rdev,
&radeon_debugfs_mst_info_fops);
#endif
}


@@ -172,7 +172,6 @@ int radeon_use_pflipirq = 2;
int radeon_bapm = -1;
int radeon_backlight = -1;
int radeon_auxch = -1;
int radeon_mst = 0;
int radeon_uvd = 1;
int radeon_vce = 1;
@@ -263,9 +262,6 @@ module_param_named(backlight, radeon_backlight, int, 0444);
MODULE_PARM_DESC(auxch, "Use native auxch experimental support (1 = enable, 0 = disable, -1 = auto)");
module_param_named(auxch, radeon_auxch, int, 0444);
MODULE_PARM_DESC(mst, "DisplayPort MST experimental support (1 = enable, 0 = disable)");
module_param_named(mst, radeon_mst, int, 0444);
MODULE_PARM_DESC(uvd, "uvd enable/disable uvd support (1 = enable, 0 = disable)");
module_param_named(uvd, radeon_uvd, int, 0444);


@@ -244,16 +244,7 @@ radeon_get_connector_for_encoder(struct drm_encoder *encoder)
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
radeon_connector = to_radeon_connector(connector);
if (radeon_encoder->is_mst_encoder) {
struct radeon_encoder_mst *mst_enc;
if (!radeon_connector->is_mst_connector)
continue;
mst_enc = radeon_encoder->enc_priv;
if (mst_enc->connector == radeon_connector->mst_port)
return connector;
} else if (radeon_encoder->active_device & radeon_connector->devices)
if (radeon_encoder->active_device & radeon_connector->devices)
return connector;
}
return NULL;
@@ -399,9 +390,6 @@ bool radeon_dig_monitor_is_duallink(struct drm_encoder *encoder,
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_DisplayPort:
if (radeon_connector->is_mst_connector)
return false;
dig_connector = radeon_connector->con_priv;
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
(dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))


@@ -100,16 +100,8 @@ static void radeon_hotplug_work_func(struct work_struct *work)
static void radeon_dp_work_func(struct work_struct *work)
{
struct radeon_device *rdev = container_of(work, struct radeon_device,
dp_work);
struct drm_device *dev = rdev->ddev;
struct drm_mode_config *mode_config = &dev->mode_config;
struct drm_connector *connector;
/* this should take a mutex */
list_for_each_entry(connector, &mode_config->connector_list, head)
radeon_connector_hotplug(connector);
}
/**
* radeon_driver_irq_preinstall_kms - drm irq preinstall callback
*


@@ -31,7 +31,6 @@
#define RADEON_MODE_H
#include <drm/display/drm_dp_helper.h>
#include <drm/display/drm_dp_mst_helper.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>
#include <drm/drm_encoder.h>
@@ -436,24 +435,12 @@ struct radeon_encoder_atom_dig {
int panel_mode;
struct radeon_afmt *afmt;
struct r600_audio_pin *pin;
int active_mst_links;
};
struct radeon_encoder_atom_dac {
enum radeon_tv_std tv_std;
};
struct radeon_encoder_mst {
int crtc;
struct radeon_encoder *primary;
struct radeon_connector *connector;
struct drm_dp_mst_port *port;
int pbn;
int fe;
bool fe_from_be;
bool enc_active;
};
struct radeon_encoder {
struct drm_encoder base;
uint32_t encoder_enum;
@@ -475,8 +462,6 @@ struct radeon_encoder {
enum radeon_output_csc output_csc;
bool can_mst;
uint32_t offset;
bool is_mst_encoder;
/* front end for this mst encoder */
};
struct radeon_connector_atom_dig {
@@ -487,7 +472,6 @@ struct radeon_connector_atom_dig {
int dp_clock;
int dp_lane_count;
bool edp_on;
bool is_mst;
};
struct radeon_gpio_rec {
@@ -531,11 +515,6 @@ enum radeon_connector_dither {
RADEON_FMT_DITHER_ENABLE = 1,
};
struct stream_attribs {
uint16_t fe;
uint16_t slots;
};
struct radeon_connector {
struct drm_connector base;
uint32_t connector_id;
@@ -558,14 +537,6 @@ struct radeon_connector {
enum radeon_connector_audio audio;
enum radeon_connector_dither dither;
int pixelclock_for_modeset;
bool is_mst_connector;
struct radeon_connector *mst_port;
struct drm_dp_mst_port *port;
struct drm_dp_mst_topology_mgr mst_mgr;
struct radeon_encoder *mst_encoder;
struct stream_attribs cur_stream_attribs[6];
int enabled_attribs;
};
#define ENCODER_MODE_IS_DP(em) (((em) == ATOM_ENCODER_MODE_DP) || \
@@ -767,8 +738,6 @@ extern void atombios_dig_transmitter_setup(struct drm_encoder *encoder,
extern void atombios_dig_transmitter_setup2(struct drm_encoder *encoder,
int action, uint8_t lane_num,
uint8_t lane_set, int fe);
extern void atombios_set_mst_encoder_crtc_source(struct drm_encoder *encoder,
int fe);
extern void radeon_atom_ext_encoder_setup_ddc(struct drm_encoder *encoder);
extern struct drm_encoder *radeon_get_external_encoder(struct drm_encoder *encoder);
void radeon_atom_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le);
@@ -986,15 +955,6 @@ void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id);
int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled);
/* mst */
int radeon_dp_mst_init(struct radeon_connector *radeon_connector);
int radeon_dp_mst_probe(struct radeon_connector *radeon_connector);
int radeon_dp_mst_check_status(struct radeon_connector *radeon_connector);
void radeon_mst_debugfs_init(struct radeon_device *rdev);
void radeon_dp_mst_prepare_pll(struct drm_crtc *crtc, struct drm_display_mode *mode);
void radeon_setup_mst_connector(struct drm_device *dev);
int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx);
void radeon_atom_release_dig_encoder(struct radeon_device *rdev, int enc_idx);
#endif


@@ -198,7 +198,7 @@ static void drm_sched_job_done_cb(struct dma_fence *f, struct dma_fence_cb *cb)
}
/**
* drm_sched_dependency_optimized
* drm_sched_dependency_optimized - test if the dependency can be optimized
*
* @fence: the dependency fence
* @entity: the entity which depends on the above fence
@@ -993,6 +993,7 @@ static int drm_sched_main(void *param)
* used
* @score: optional score atomic shared with other schedulers
* @name: name used for debugging
* @dev: target &struct device
*
* Return 0 on success, otherwise error code.
*/


@@ -18,6 +18,7 @@
#include <linux/pwm.h>
#include <linux/regulator/consumer.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_edid.h>
@@ -564,94 +565,32 @@ static int ssd130x_fb_blit_rect(struct drm_framebuffer *fb, const struct iosys_m
return ret;
}
static int ssd130x_display_pipe_mode_valid(struct drm_simple_display_pipe *pipe,
const struct drm_display_mode *mode)
static int ssd130x_primary_plane_helper_atomic_check(struct drm_plane *plane,
struct drm_atomic_state *new_state)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev);
struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(new_state, plane);
struct drm_crtc *new_crtc = new_plane_state->crtc;
struct drm_crtc_state *new_crtc_state = NULL;
if (mode->hdisplay != ssd130x->mode.hdisplay &&
mode->vdisplay != ssd130x->mode.vdisplay)
return MODE_ONE_SIZE;
if (new_crtc)
new_crtc_state = drm_atomic_get_new_crtc_state(new_state, new_crtc);
if (mode->hdisplay != ssd130x->mode.hdisplay)
return MODE_ONE_WIDTH;
if (mode->vdisplay != ssd130x->mode.vdisplay)
return MODE_ONE_HEIGHT;
return MODE_OK;
return drm_atomic_helper_check_plane_state(new_plane_state, new_crtc_state,
DRM_PLANE_NO_SCALING,
DRM_PLANE_NO_SCALING,
false, false);
}
static void ssd130x_display_pipe_enable(struct drm_simple_display_pipe *pipe,
struct drm_crtc_state *crtc_state,
struct drm_plane_state *plane_state)
static void ssd130x_primary_plane_helper_atomic_update(struct drm_plane *plane,
struct drm_atomic_state *old_state)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev);
struct drm_plane_state *plane_state = plane->state;
struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(old_state, plane);
struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
struct drm_device *drm = &ssd130x->drm;
int idx, ret;
ret = ssd130x_power_on(ssd130x);
if (ret)
return;
ret = ssd130x_init(ssd130x);
if (ret)
goto out_power_off;
if (!drm_dev_enter(drm, &idx))
goto out_power_off;
ssd130x_fb_blit_rect(plane_state->fb, &shadow_plane_state->data[0], &plane_state->dst);
ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
backlight_enable(ssd130x->bl_dev);
drm_dev_exit(idx);
return;
out_power_off:
ssd130x_power_off(ssd130x);
}
static void ssd130x_display_pipe_disable(struct drm_simple_display_pipe *pipe)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev);
struct drm_device *drm = &ssd130x->drm;
int idx;
if (!drm_dev_enter(drm, &idx))
return;
ssd130x_clear_screen(ssd130x);
backlight_disable(ssd130x->bl_dev);
ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF);
ssd130x_power_off(ssd130x);
drm_dev_exit(idx);
}
static void ssd130x_display_pipe_update(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *old_plane_state)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(pipe->crtc.dev);
struct drm_plane_state *plane_state = pipe->plane.state;
struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state);
struct drm_framebuffer *fb = plane_state->fb;
struct drm_device *drm = &ssd130x->drm;
struct drm_device *drm = plane->dev;
struct drm_rect src_clip, dst_clip;
int idx;
if (!fb)
return;
if (!pipe->crtc.state->active)
return;
if (!drm_atomic_helper_damage_merged(old_plane_state, plane_state, &src_clip))
return;
@@ -667,15 +606,132 @@ static void ssd130x_display_pipe_update(struct drm_simple_display_pipe *pipe,
drm_dev_exit(idx);
}
static const struct drm_simple_display_pipe_funcs ssd130x_pipe_funcs = {
.mode_valid = ssd130x_display_pipe_mode_valid,
.enable = ssd130x_display_pipe_enable,
.disable = ssd130x_display_pipe_disable,
.update = ssd130x_display_pipe_update,
DRM_GEM_SIMPLE_DISPLAY_PIPE_SHADOW_PLANE_FUNCS,
static void ssd130x_primary_plane_helper_atomic_disable(struct drm_plane *plane,
struct drm_atomic_state *old_state)
{
struct drm_device *drm = plane->dev;
struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
int idx;
if (!drm_dev_enter(drm, &idx))
return;
ssd130x_clear_screen(ssd130x);
drm_dev_exit(idx);
}
static const struct drm_plane_helper_funcs ssd130x_primary_plane_helper_funcs = {
DRM_GEM_SHADOW_PLANE_HELPER_FUNCS,
.atomic_check = ssd130x_primary_plane_helper_atomic_check,
.atomic_update = ssd130x_primary_plane_helper_atomic_update,
.atomic_disable = ssd130x_primary_plane_helper_atomic_disable,
};
static int ssd130x_connector_get_modes(struct drm_connector *connector)
static const struct drm_plane_funcs ssd130x_primary_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.destroy = drm_plane_cleanup,
DRM_GEM_SHADOW_PLANE_FUNCS,
};
static enum drm_mode_status ssd130x_crtc_helper_mode_valid(struct drm_crtc *crtc,
const struct drm_display_mode *mode)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(crtc->dev);
if (mode->hdisplay != ssd130x->mode.hdisplay &&
mode->vdisplay != ssd130x->mode.vdisplay)
return MODE_ONE_SIZE;
else if (mode->hdisplay != ssd130x->mode.hdisplay)
return MODE_ONE_WIDTH;
else if (mode->vdisplay != ssd130x->mode.vdisplay)
return MODE_ONE_HEIGHT;
return MODE_OK;
}
static int ssd130x_crtc_helper_atomic_check(struct drm_crtc *crtc,
struct drm_atomic_state *new_state)
{
struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc);
int ret;
ret = drm_atomic_helper_check_crtc_state(new_crtc_state, false);
if (ret)
return ret;
return drm_atomic_add_affected_planes(new_state, crtc);
}
/*
* The CRTC is always enabled. Screen updates are performed by
* the primary plane's atomic_update function. Disabling clears
* the screen in the primary plane's atomic_disable function.
*/
static const struct drm_crtc_helper_funcs ssd130x_crtc_helper_funcs = {
.mode_valid = ssd130x_crtc_helper_mode_valid,
.atomic_check = ssd130x_crtc_helper_atomic_check,
};
static void ssd130x_crtc_reset(struct drm_crtc *crtc)
{
struct drm_device *drm = crtc->dev;
struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
ssd130x_init(ssd130x);
drm_atomic_helper_crtc_reset(crtc);
}
static const struct drm_crtc_funcs ssd130x_crtc_funcs = {
.reset = ssd130x_crtc_reset,
.destroy = drm_crtc_cleanup,
.set_config = drm_atomic_helper_set_config,
.page_flip = drm_atomic_helper_page_flip,
.atomic_duplicate_state = drm_atomic_helper_crtc_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_crtc_destroy_state,
};
static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct drm_device *drm = encoder->dev;
struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
int ret;
ret = ssd130x_power_on(ssd130x);
if (ret)
return;
ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON);
backlight_enable(ssd130x->bl_dev);
}
static void ssd130x_encoder_helper_atomic_disable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct drm_device *drm = encoder->dev;
struct ssd130x_device *ssd130x = drm_to_ssd130x(drm);
backlight_disable(ssd130x->bl_dev);
ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_OFF);
ssd130x_power_off(ssd130x);
}
static const struct drm_encoder_helper_funcs ssd130x_encoder_helper_funcs = {
.atomic_enable = ssd130x_encoder_helper_atomic_enable,
.atomic_disable = ssd130x_encoder_helper_atomic_disable,
};
static const struct drm_encoder_funcs ssd130x_encoder_funcs = {
.destroy = drm_encoder_cleanup,
};
static int ssd130x_connector_helper_get_modes(struct drm_connector *connector)
{
struct ssd130x_device *ssd130x = drm_to_ssd130x(connector->dev);
struct drm_display_mode *mode;
@@ -695,7 +751,7 @@ static int ssd130x_connector_get_modes(struct drm_connector *connector)
}
static const struct drm_connector_helper_funcs ssd130x_connector_helper_funcs = {
.get_modes = ssd130x_connector_get_modes,
.get_modes = ssd130x_connector_helper_get_modes,
};
static const struct drm_connector_funcs ssd130x_connector_funcs = {
@@ -806,8 +862,16 @@ static int ssd130x_init_modeset(struct ssd130x_device *ssd130x)
struct device *dev = ssd130x->dev;
struct drm_device *drm = &ssd130x->drm;
unsigned long max_width, max_height;
struct drm_plane *primary_plane;
struct drm_crtc *crtc;
struct drm_encoder *encoder;
struct drm_connector *connector;
int ret;
/*
* Modesetting
*/
ret = drmm_mode_config_init(drm);
if (ret) {
dev_err(dev, "DRM mode config init failed: %d\n", ret);
@@ -833,25 +897,65 @@ static int ssd130x_init_modeset(struct ssd130x_device *ssd130x)
drm->mode_config.preferred_depth = 32;
drm->mode_config.funcs = &ssd130x_mode_config_funcs;
ret = drm_connector_init(drm, &ssd130x->connector, &ssd130x_connector_funcs,
/* Primary plane */
primary_plane = &ssd130x->primary_plane;
ret = drm_universal_plane_init(drm, primary_plane, 0, &ssd130x_primary_plane_funcs,
ssd130x_formats, ARRAY_SIZE(ssd130x_formats),
NULL, DRM_PLANE_TYPE_PRIMARY, NULL);
if (ret) {
dev_err(dev, "DRM primary plane init failed: %d\n", ret);
return ret;
}
drm_plane_helper_add(primary_plane, &ssd130x_primary_plane_helper_funcs);
drm_plane_enable_fb_damage_clips(primary_plane);
/* CRTC */
crtc = &ssd130x->crtc;
ret = drm_crtc_init_with_planes(drm, crtc, primary_plane, NULL,
&ssd130x_crtc_funcs, NULL);
if (ret) {
dev_err(dev, "DRM crtc init failed: %d\n", ret);
return ret;
}
drm_crtc_helper_add(crtc, &ssd130x_crtc_helper_funcs);
/* Encoder */
encoder = &ssd130x->encoder;
ret = drm_encoder_init(drm, encoder, &ssd130x_encoder_funcs,
DRM_MODE_ENCODER_NONE, NULL);
if (ret) {
dev_err(dev, "DRM encoder init failed: %d\n", ret);
return ret;
}
drm_encoder_helper_add(encoder, &ssd130x_encoder_helper_funcs);
encoder->possible_crtcs = drm_crtc_mask(crtc);
/* Connector */
connector = &ssd130x->connector;
ret = drm_connector_init(drm, connector, &ssd130x_connector_funcs,
DRM_MODE_CONNECTOR_Unknown);
if (ret) {
dev_err(dev, "DRM connector init failed: %d\n", ret);
return ret;
}
drm_connector_helper_add(&ssd130x->connector, &ssd130x_connector_helper_funcs);
drm_connector_helper_add(connector, &ssd130x_connector_helper_funcs);
ret = drm_simple_display_pipe_init(drm, &ssd130x->pipe, &ssd130x_pipe_funcs,
ssd130x_formats, ARRAY_SIZE(ssd130x_formats),
NULL, &ssd130x->connector);
ret = drm_connector_attach_encoder(connector, encoder);
if (ret) {
dev_err(dev, "DRM simple display pipeline init failed: %d\n", ret);
dev_err(dev, "DRM attach connector to encoder failed: %d\n", ret);
return ret;
}
drm_plane_enable_fb_damage_clips(&ssd130x->pipe.plane);
drm_mode_config_reset(drm);
return 0;


@@ -13,8 +13,11 @@
#ifndef __SSD1307X_H__
#define __SSD1307X_H__
#include <drm/drm_connector.h>
#include <drm/drm_crtc.h>
#include <drm/drm_drv.h>
#include <drm/drm_simple_kms_helper.h>
#include <drm/drm_encoder.h>
#include <drm/drm_plane_helper.h>
#include <linux/regmap.h>
@@ -42,8 +45,10 @@ struct ssd130x_deviceinfo {
struct ssd130x_device {
struct drm_device drm;
struct device *dev;
struct drm_simple_display_pipe pipe;
struct drm_display_mode mode;
struct drm_plane primary_plane;
struct drm_crtc crtc;
struct drm_encoder encoder;
struct drm_connector connector;
struct i2c_client *client;


@@ -14,6 +14,7 @@
#include <linux/regmap.h>
#include <linux/reset.h>
#include <drm/drm_atomic.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_of.h>
#include <drm/drm_panel.h>
@@ -275,13 +276,6 @@ drm_encoder_to_sun4i_tv(struct drm_encoder *encoder)
encoder);
}
static inline struct sun4i_tv *
drm_connector_to_sun4i_tv(struct drm_connector *connector)
{
return container_of(connector, struct sun4i_tv,
connector);
}
/*
* FIXME: If only the drm_display_mode private field was usable, this
* could go away...
@@ -339,7 +333,8 @@ static void sun4i_tv_mode_to_drm_mode(const struct tv_mode *tv_mode,
mode->vtotal = mode->vsync_end + tv_mode->vback_porch;
}
static void sun4i_tv_disable(struct drm_encoder *encoder)
static void sun4i_tv_disable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder);
struct sun4i_crtc *crtc = drm_crtc_to_sun4i_crtc(encoder->crtc);
@@ -353,27 +348,18 @@ static void sun4i_tv_disable(struct drm_encoder *encoder)
sunxi_engine_disable_color_correction(crtc->engine);
}
static void sun4i_tv_enable(struct drm_encoder *encoder)
static void sun4i_tv_enable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder);
struct sun4i_crtc *crtc = drm_crtc_to_sun4i_crtc(encoder->crtc);
struct drm_crtc_state *crtc_state =
drm_atomic_get_new_crtc_state(state, encoder->crtc);
struct drm_display_mode *mode = &crtc_state->mode;
const struct tv_mode *tv_mode = sun4i_tv_find_tv_by_mode(mode);
DRM_DEBUG_DRIVER("Enabling the TV Output\n");
sunxi_engine_apply_color_correction(crtc->engine);
regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG,
SUN4I_TVE_EN_ENABLE,
SUN4I_TVE_EN_ENABLE);
}
static void sun4i_tv_mode_set(struct drm_encoder *encoder,
struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
struct sun4i_tv *tv = drm_encoder_to_sun4i_tv(encoder);
const struct tv_mode *tv_mode = sun4i_tv_find_tv_by_mode(mode);
/* Enable and map the DAC to the output */
regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG,
SUN4I_TVE_EN_DAC_MAP_MASK,
@@ -466,12 +452,17 @@ static void sun4i_tv_mode_set(struct drm_encoder *encoder,
SUN4I_TVE_RESYNC_FIELD : 0));
regmap_write(tv->regs, SUN4I_TVE_SLAVE_REG, 0);
sunxi_engine_apply_color_correction(crtc->engine);
regmap_update_bits(tv->regs, SUN4I_TVE_EN_REG,
SUN4I_TVE_EN_ENABLE,
SUN4I_TVE_EN_ENABLE);
}
static const struct drm_encoder_helper_funcs sun4i_tv_helper_funcs = {
.disable = sun4i_tv_disable,
.enable = sun4i_tv_enable,
.mode_set = sun4i_tv_mode_set,
.atomic_disable = sun4i_tv_disable,
.atomic_enable = sun4i_tv_enable,
};
static int sun4i_tv_comp_get_modes(struct drm_connector *connector)
@@ -497,27 +488,13 @@ static int sun4i_tv_comp_get_modes(struct drm_connector *connector)
return i;
}
static int sun4i_tv_comp_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
/* TODO */
return MODE_OK;
}
static const struct drm_connector_helper_funcs sun4i_tv_comp_connector_helper_funcs = {
.get_modes = sun4i_tv_comp_get_modes,
.mode_valid = sun4i_tv_comp_mode_valid,
};
static void
sun4i_tv_comp_connector_destroy(struct drm_connector *connector)
{
drm_connector_cleanup(connector);
}
static const struct drm_connector_funcs sun4i_tv_comp_connector_funcs = {
.fill_modes = drm_helper_probe_single_connector_modes,
.destroy = sun4i_tv_comp_connector_destroy,
.destroy = drm_connector_cleanup,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
@@ -604,7 +581,7 @@ static int sun4i_tv_bind(struct device *dev, struct device *master,
if (ret) {
dev_err(dev,
"Couldn't initialise the Composite connector\n");
goto err_cleanup_connector;
goto err_cleanup_encoder;
}
tv->connector.interlace_allowed = true;
@@ -612,7 +589,7 @@ static int sun4i_tv_bind(struct device *dev, struct device *master,
return 0;
err_cleanup_connector:
err_cleanup_encoder:
drm_encoder_cleanup(&tv->encoder);
err_disable_clk:
clk_disable_unprepare(tv->clk);
@@ -629,6 +606,7 @@ static void sun4i_tv_unbind(struct device *dev, struct device *master,
drm_connector_cleanup(&tv->connector);
drm_encoder_cleanup(&tv->encoder);
clk_disable_unprepare(tv->clk);
reset_control_assert(tv->reset);
}
static const struct component_ops sun4i_tv_ops = {


@@ -16,7 +16,7 @@ static void drm_cmdline_test_force_e_only(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "e";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -34,7 +34,7 @@ static void drm_cmdline_test_force_D_only_not_digital(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "D";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -56,7 +56,7 @@ static void drm_cmdline_test_force_D_only_hdmi(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "D";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&connector_hdmi, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -78,7 +78,7 @@ static void drm_cmdline_test_force_D_only_dvi(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "D";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&connector_dvi, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -96,7 +96,7 @@ static void drm_cmdline_test_force_d_only(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "d";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -109,30 +109,12 @@ static void drm_cmdline_test_force_d_only(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_OFF);
}
static void drm_cmdline_test_margin_only(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "m";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_interlace_only(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "i";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -149,48 +131,12 @@ static void drm_cmdline_test_res(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_res_missing_x(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "x480";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_missing_y(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "1024x";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_bad_y(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "1024xtest";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_missing_y_bpp(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "1024x-24";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_vesa(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480M";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -212,7 +158,7 @@ static void drm_cmdline_test_res_vesa_rblank(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480MR";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -234,7 +180,7 @@ static void drm_cmdline_test_res_rblank(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480R";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -256,7 +202,7 @@ static void drm_cmdline_test_res_bpp(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -274,21 +220,12 @@ static void drm_cmdline_test_res_bpp(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_res_bad_bpp(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-test";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_refresh(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480@60";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -306,21 +243,12 @@ static void drm_cmdline_test_res_refresh(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_res_bad_refresh(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480@refresh";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_bpp_refresh(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -344,7 +272,7 @@ static void drm_cmdline_test_res_bpp_refresh_interlaced(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60i";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -366,9 +294,9 @@ static void drm_cmdline_test_res_bpp_refresh_interlaced(struct kunit *test)
static void drm_cmdline_test_res_bpp_refresh_margins(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60m";
const char *cmdline = "720x480-24@60m";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -390,9 +318,9 @@ static void drm_cmdline_test_res_bpp_refresh_margins(struct kunit *test)
static void drm_cmdline_test_res_bpp_refresh_force_off(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60d";
const char *cmdline = "720x480-24@60d";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -411,21 +339,12 @@ static void drm_cmdline_test_res_bpp_refresh_force_off(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_OFF);
}
static void drm_cmdline_test_res_bpp_refresh_force_on_off(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60de";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_bpp_refresh_force_on(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60e";
const char *cmdline = "720x480-24@60e";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -449,7 +368,7 @@ static void drm_cmdline_test_res_bpp_refresh_force_on_analog(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24@60D";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -476,7 +395,7 @@ static void drm_cmdline_test_res_bpp_refresh_force_on_digital(struct kunit *test
};
const char *cmdline = "720x480-24@60D";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -524,7 +443,7 @@ static void drm_cmdline_test_res_margins_force_on(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480me";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -546,7 +465,7 @@ static void drm_cmdline_test_res_vesa_margins(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480Mm";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -563,30 +482,12 @@ static void drm_cmdline_test_res_vesa_margins(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_res_invalid_mode(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480f";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_res_bpp_wrong_place_mode(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480e-24";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_name(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_STREQ(test, mode.name, "NTSC");
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -598,7 +499,7 @@ static void drm_cmdline_test_name_bpp(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC-24";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_STREQ(test, mode.name, "NTSC");
@@ -608,48 +509,12 @@ static void drm_cmdline_test_name_bpp(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.bpp, 24);
}
static void drm_cmdline_test_name_bpp_refresh(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC-24@60";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_name_refresh(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC@60";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_name_refresh_wrong_mode(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC@60m";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_name_refresh_invalid_mode(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC@60f";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_name_option(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC,rotate=180";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_STREQ(test, mode.name, "NTSC");
@@ -661,7 +526,7 @@ static void drm_cmdline_test_name_bpp_option(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "NTSC-24,rotate=180";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_STREQ(test, mode.name, "NTSC");
@@ -675,7 +540,7 @@ static void drm_cmdline_test_rotate_0(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=0";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -698,7 +563,7 @@ static void drm_cmdline_test_rotate_90(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=90";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -721,7 +586,7 @@ static void drm_cmdline_test_rotate_180(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=180";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -744,7 +609,7 @@ static void drm_cmdline_test_rotate_270(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=270";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -762,39 +627,12 @@ static void drm_cmdline_test_rotate_270(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_rotate_multiple(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=0,rotate=90";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_rotate_invalid_val(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=42";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_rotate_truncated(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_hmirror(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,reflect_x";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -817,7 +655,7 @@ static void drm_cmdline_test_vmirror(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,reflect_y";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -841,7 +679,7 @@ static void drm_cmdline_test_margin_options(struct kunit *test)
const char *cmdline =
"720x480,margin_right=14,margin_left=24,margin_bottom=36,margin_top=42";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -867,7 +705,7 @@ static void drm_cmdline_test_multiple_options(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,rotate=270,reflect_x";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -885,21 +723,12 @@ static void drm_cmdline_test_multiple_options(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
static void drm_cmdline_test_invalid_option(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480,test=42";
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
}
static void drm_cmdline_test_bpp_extra_and_option(struct kunit *test)
{
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480-24e,rotate=180";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -923,7 +752,7 @@ static void drm_cmdline_test_extra_and_option(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "720x480e,rotate=180";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_TRUE(test, mode.specified);
KUNIT_EXPECT_EQ(test, mode.xres, 720);
@@ -945,7 +774,7 @@ static void drm_cmdline_test_freestanding_options(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "margin_right=14,margin_left=24,margin_bottom=36,margin_top=42";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -968,7 +797,7 @@ static void drm_cmdline_test_freestanding_force_e_and_options(struct kunit *test
struct drm_cmdline_mode mode = { };
const char *cmdline = "e,margin_right=14,margin_left=24,margin_bottom=36,margin_top=42";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -991,7 +820,7 @@ static void drm_cmdline_test_panel_orientation(struct kunit *test)
struct drm_cmdline_mode mode = { };
const char *cmdline = "panel_orientation=upside_down";
KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
KUNIT_ASSERT_TRUE(test, drm_mode_parse_command_line_for_connector(cmdline,
&no_connector, &mode));
KUNIT_EXPECT_FALSE(test, mode.specified);
KUNIT_EXPECT_FALSE(test, mode.refresh_specified);
@@ -1006,64 +835,148 @@ static void drm_cmdline_test_panel_orientation(struct kunit *test)
KUNIT_EXPECT_EQ(test, mode.force, DRM_FORCE_UNSPECIFIED);
}
struct drm_cmdline_invalid_test {
const char *name;
const char *cmdline;
};
static void drm_cmdline_test_invalid(struct kunit *test)
{
const struct drm_cmdline_invalid_test *params = test->param_value;
struct drm_cmdline_mode mode = { };
KUNIT_EXPECT_FALSE(test, drm_mode_parse_command_line_for_connector(params->cmdline,
&no_connector,
&mode));
}
static const struct drm_cmdline_invalid_test drm_cmdline_invalid_tests[] = {
{
.name = "margin_only",
.cmdline = "m",
},
{
.name = "interlace_only",
.cmdline = "i",
},
{
.name = "res_missing_x",
.cmdline = "x480",
},
{
.name = "res_missing_y",
.cmdline = "1024x",
},
{
.name = "res_bad_y",
.cmdline = "1024xtest",
},
{
.name = "res_missing_y_bpp",
.cmdline = "1024x-24",
},
{
.name = "res_bad_bpp",
.cmdline = "720x480-test",
},
{
.name = "res_bad_refresh",
.cmdline = "720x480@refresh",
},
{
.name = "res_bpp_refresh_force_on_off",
.cmdline = "720x480-24@60de",
},
{
.name = "res_invalid_mode",
.cmdline = "720x480f",
},
{
.name = "res_bpp_wrong_place_mode",
.cmdline = "720x480e-24",
},
{
.name = "name_bpp_refresh",
.cmdline = "NTSC-24@60",
},
{
.name = "name_refresh",
.cmdline = "NTSC@60",
},
{
.name = "name_refresh_wrong_mode",
.cmdline = "NTSC@60m",
},
{
.name = "name_refresh_invalid_mode",
.cmdline = "NTSC@60f",
},
{
.name = "rotate_multiple",
.cmdline = "720x480,rotate=0,rotate=90",
},
{
.name = "rotate_invalid_val",
.cmdline = "720x480,rotate=42",
},
{
.name = "rotate_truncated",
.cmdline = "720x480,rotate=",
},
{
.name = "invalid_option",
.cmdline = "720x480,test=42",
},
};
static void drm_cmdline_invalid_desc(const struct drm_cmdline_invalid_test *t,
char *desc)
{
sprintf(desc, "%s", t->name);
}
KUNIT_ARRAY_PARAM(drm_cmdline_invalid, drm_cmdline_invalid_tests, drm_cmdline_invalid_desc);
static struct kunit_case drm_cmdline_parser_tests[] = {
KUNIT_CASE(drm_cmdline_test_force_d_only),
KUNIT_CASE(drm_cmdline_test_force_D_only_dvi),
KUNIT_CASE(drm_cmdline_test_force_D_only_hdmi),
KUNIT_CASE(drm_cmdline_test_force_D_only_not_digital),
KUNIT_CASE(drm_cmdline_test_force_e_only),
KUNIT_CASE(drm_cmdline_test_margin_only),
KUNIT_CASE(drm_cmdline_test_interlace_only),
KUNIT_CASE(drm_cmdline_test_res),
KUNIT_CASE(drm_cmdline_test_res_missing_x),
KUNIT_CASE(drm_cmdline_test_res_missing_y),
KUNIT_CASE(drm_cmdline_test_res_bad_y),
KUNIT_CASE(drm_cmdline_test_res_missing_y_bpp),
KUNIT_CASE(drm_cmdline_test_res_vesa),
KUNIT_CASE(drm_cmdline_test_res_vesa_rblank),
KUNIT_CASE(drm_cmdline_test_res_rblank),
KUNIT_CASE(drm_cmdline_test_res_bpp),
KUNIT_CASE(drm_cmdline_test_res_bad_bpp),
KUNIT_CASE(drm_cmdline_test_res_refresh),
KUNIT_CASE(drm_cmdline_test_res_bad_refresh),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_interlaced),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_margins),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_off),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_off),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_analog),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_force_on_digital),
KUNIT_CASE(drm_cmdline_test_res_bpp_refresh_interlaced_margins_force_on),
KUNIT_CASE(drm_cmdline_test_res_margins_force_on),
KUNIT_CASE(drm_cmdline_test_res_vesa_margins),
KUNIT_CASE(drm_cmdline_test_res_invalid_mode),
KUNIT_CASE(drm_cmdline_test_res_bpp_wrong_place_mode),
KUNIT_CASE(drm_cmdline_test_name),
KUNIT_CASE(drm_cmdline_test_name_bpp),
KUNIT_CASE(drm_cmdline_test_name_refresh),
KUNIT_CASE(drm_cmdline_test_name_bpp_refresh),
KUNIT_CASE(drm_cmdline_test_name_refresh_wrong_mode),
KUNIT_CASE(drm_cmdline_test_name_refresh_invalid_mode),
KUNIT_CASE(drm_cmdline_test_name_option),
KUNIT_CASE(drm_cmdline_test_name_bpp_option),
KUNIT_CASE(drm_cmdline_test_rotate_0),
KUNIT_CASE(drm_cmdline_test_rotate_90),
KUNIT_CASE(drm_cmdline_test_rotate_180),
KUNIT_CASE(drm_cmdline_test_rotate_270),
KUNIT_CASE(drm_cmdline_test_rotate_multiple),
KUNIT_CASE(drm_cmdline_test_rotate_invalid_val),
KUNIT_CASE(drm_cmdline_test_rotate_truncated),
KUNIT_CASE(drm_cmdline_test_hmirror),
KUNIT_CASE(drm_cmdline_test_vmirror),
KUNIT_CASE(drm_cmdline_test_margin_options),
KUNIT_CASE(drm_cmdline_test_multiple_options),
KUNIT_CASE(drm_cmdline_test_invalid_option),
KUNIT_CASE(drm_cmdline_test_bpp_extra_and_option),
KUNIT_CASE(drm_cmdline_test_extra_and_option),
KUNIT_CASE(drm_cmdline_test_freestanding_options),
KUNIT_CASE(drm_cmdline_test_freestanding_force_e_and_options),
KUNIT_CASE(drm_cmdline_test_panel_orientation),
KUNIT_CASE_PARAM(drm_cmdline_test_invalid, drm_cmdline_invalid_gen_params),
{}
};


@@ -309,6 +309,8 @@ static void bochs_hw_fini(struct drm_device *dev)
static void bochs_hw_blank(struct bochs_device *bochs, bool blank)
{
DRM_DEBUG_DRIVER("hw_blank %d\n", blank);
/* enable color bit (so VGA_IS1_RC access works) */
bochs_vga_writeb(bochs, VGA_MIS_W, VGA_MIS_COLOR);
/* discard ar_flip_flop */
(void)bochs_vga_readb(bochs, VGA_IS1_RC);
/* blank or unblank; we need only update index and set 0x20 */


@@ -518,6 +518,9 @@ out:
bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
const struct ttm_place *place)
{
struct ttm_resource *res = bo->resource;
struct ttm_device *bdev = bo->bdev;
dma_resv_assert_held(bo->base.resv);
if (bo->resource->mem_type == TTM_PL_SYSTEM)
return true;
@@ -525,11 +528,7 @@ bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
/* Don't evict this BO if it's outside of the
* requested placement range
*/
if (place->fpfn >= (bo->resource->start + bo->resource->num_pages) ||
(place->lpfn && place->lpfn <= bo->resource->start))
return false;
return true;
return ttm_resource_intersects(bdev, res, place, bo->base.size);
}
EXPORT_SYMBOL(ttm_bo_eviction_valuable);


@@ -113,6 +113,37 @@ static void ttm_range_man_free(struct ttm_resource_manager *man,
kfree(node);
}
static bool ttm_range_man_intersects(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct drm_mm_node *node = &to_ttm_range_mgr_node(res)->mm_nodes[0];
u32 num_pages = PFN_UP(size);
/* Don't evict BOs outside of the requested placement range */
if (place->fpfn >= (node->start + num_pages) ||
(place->lpfn && place->lpfn <= node->start))
return false;
return true;
}
static bool ttm_range_man_compatible(struct ttm_resource_manager *man,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct drm_mm_node *node = &to_ttm_range_mgr_node(res)->mm_nodes[0];
u32 num_pages = PFN_UP(size);
if (node->start < place->fpfn ||
(place->lpfn && (node->start + num_pages) > place->lpfn))
return false;
return true;
}
static void ttm_range_man_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
@@ -126,6 +157,8 @@ static void ttm_range_man_debug(struct ttm_resource_manager *man,
static const struct ttm_resource_manager_func ttm_range_manager_func = {
.alloc = ttm_range_man_alloc,
.free = ttm_range_man_free,
.intersects = ttm_range_man_intersects,
.compatible = ttm_range_man_compatible,
.debug = ttm_range_man_debug
};


@@ -253,10 +253,71 @@ void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
}
EXPORT_SYMBOL(ttm_resource_free);
/**
* ttm_resource_intersects - test for intersection
*
* @bdev: TTM device structure
* @res: The resource to test
* @place: The placement to test
* @size: How many bytes the new allocation needs.
*
* Test if @res intersects with @place and @size. Used for testing if evictions
* are valuable or not.
*
* Returns true if the res placement intersects with @place and @size.
*/
bool ttm_resource_intersects(struct ttm_device *bdev,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct ttm_resource_manager *man;
if (!res)
return false;
man = ttm_manager_type(bdev, res->mem_type);
if (!place || !man->func->intersects)
return true;
return man->func->intersects(man, res, place, size);
}
/**
* ttm_resource_compatible - test for compatibility
*
* @bdev: TTM device structure
* @res: The resource to test
* @place: The placement to test
* @size: How many bytes the new allocation needs.
*
* Test if @res is compatible with @place and @size.
*
* Returns true if the res placement is compatible with @place and @size.
*/
bool ttm_resource_compatible(struct ttm_device *bdev,
struct ttm_resource *res,
const struct ttm_place *place,
size_t size)
{
struct ttm_resource_manager *man;
if (!res || !place)
return false;
man = ttm_manager_type(bdev, res->mem_type);
if (!man->func->compatible)
return true;
return man->func->compatible(man, res, place, size);
}
static bool ttm_resource_places_compat(struct ttm_resource *res,
const struct ttm_place *places,
unsigned num_placement)
{
struct ttm_buffer_object *bo = res->bo;
struct ttm_device *bdev = bo->bdev;
unsigned i;
if (res->placement & TTM_PL_FLAG_TEMPORARY)
@@ -265,8 +326,7 @@ static bool ttm_resource_places_compat(struct ttm_resource *res,
for (i = 0; i < num_placement; i++) {
const struct ttm_place *heap = &places[i];
if (res->start < heap->fpfn || (heap->lpfn &&
(res->start + res->num_pages) > heap->lpfn))
if (!ttm_resource_compatible(bdev, res, heap, bo->base.size))
continue;
if ((res->mem_type == heap->mem_type) &&


@@ -64,7 +64,7 @@ static int tve200_modeset_init(struct drm_device *dev)
struct tve200_drm_dev_private *priv = dev->dev_private;
struct drm_panel *panel;
struct drm_bridge *bridge;
int ret = 0;
int ret;
drm_mode_config_init(dev);
mode_config = &dev->mode_config;
@@ -92,6 +92,7 @@ static int tve200_modeset_init(struct drm_device *dev)
* method to get the connector out of the bridge.
*/
dev_err(dev->dev, "the bridge is not a panel\n");
ret = -EINVAL;
goto out_bridge;
}


@@ -39,6 +39,7 @@
#include <drm/drm_atomic_uapi.h>
#include <drm/drm_fb_dma_helper.h>
#include <drm/drm_framebuffer.h>
#include <drm/drm_drv.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
@@ -295,10 +296,17 @@ struct drm_encoder *vc4_get_crtc_encoder(struct drm_crtc *crtc,
static void vc4_crtc_pixelvalve_reset(struct drm_crtc *crtc)
{
struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
struct drm_device *dev = crtc->dev;
int idx;
if (!drm_dev_enter(dev, &idx))
return;
/* The PV needs to be disabled before it can be flushed */
CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) & ~PV_CONTROL_EN);
CRTC_WRITE(PV_CONTROL, CRTC_READ(PV_CONTROL) | PV_CONTROL_FIFO_CLR);
drm_dev_exit(idx);
}
static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encoder,
@@ -321,6 +329,10 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
u32 format = is_dsi1 ? PV_CONTROL_FORMAT_DSIV_24 : PV_CONTROL_FORMAT_24;
u8 ppc = pv_data->pixels_per_clock;
bool debug_dump_regs = false;
int idx;
if (!drm_dev_enter(dev, &idx))
return;
if (debug_dump_regs) {
struct drm_printer p = drm_info_printer(&vc4_crtc->pdev->dev);
@@ -410,6 +422,8 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
drm_crtc_index(crtc));
drm_print_regset32(&p, &vc4_crtc->regset);
}
drm_dev_exit(idx);
}
static void require_hvs_enabled(struct drm_device *dev)
@@ -430,7 +444,10 @@ static int vc4_crtc_disable(struct drm_crtc *crtc,
struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
struct drm_device *dev = crtc->dev;
struct vc4_dev *vc4 = to_vc4_dev(dev);
int ret;
int idx, ret;
if (!drm_dev_enter(dev, &idx))
return -ENODEV;
CRTC_WRITE(PV_V_CONTROL,
CRTC_READ(PV_V_CONTROL) & ~PV_VCONTROL_VIDEN);
@@ -464,6 +481,8 @@ static int vc4_crtc_disable(struct drm_crtc *crtc,
if (vc4_encoder && vc4_encoder->post_crtc_powerdown)
vc4_encoder->post_crtc_powerdown(encoder, state);
drm_dev_exit(idx);
return 0;
}
@@ -588,10 +607,14 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
struct drm_encoder *encoder = vc4_get_crtc_encoder(crtc, new_state);
struct vc4_encoder *vc4_encoder = to_vc4_encoder(encoder);
int idx;
drm_dbg(dev, "Enabling CRTC %s (%u) connected to Encoder %s (%u)",
crtc->name, crtc->base.id, encoder->name, encoder->base.id);
if (!drm_dev_enter(dev, &idx))
return;
require_hvs_enabled(dev);
/* Enable vblank irq handling before crtc is started otherwise
@@ -619,6 +642,8 @@ static void vc4_crtc_atomic_enable(struct drm_crtc *crtc,
if (vc4_encoder->post_crtc_enable)
vc4_encoder->post_crtc_enable(encoder, state);
drm_dev_exit(idx);
}
static enum drm_mode_status vc4_crtc_mode_valid(struct drm_crtc *crtc,
@@ -711,17 +736,31 @@ static int vc4_crtc_atomic_check(struct drm_crtc *crtc,
static int vc4_enable_vblank(struct drm_crtc *crtc)
{
struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
struct drm_device *dev = crtc->dev;
int idx;
if (!drm_dev_enter(dev, &idx))
return -ENODEV;
CRTC_WRITE(PV_INTEN, PV_INT_VFP_START);
drm_dev_exit(idx);
return 0;
}
static void vc4_disable_vblank(struct drm_crtc *crtc)
{
struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
struct drm_device *dev = crtc->dev;
int idx;
if (!drm_dev_enter(dev, &idx))
return;
CRTC_WRITE(PV_INTEN, 0);
drm_dev_exit(idx);
}
static void vc4_crtc_handle_page_flip(struct vc4_crtc *vc4_crtc)


@@ -1425,7 +1425,7 @@ static void vc4_hdmi_encoder_pre_crtc_enable(struct drm_encoder *encoder,
mutex_lock(&vc4_hdmi->mutex);
if (!drm_dev_enter(drm, &idx))
return;
goto out;
if (vc4_hdmi->variant->csc_setup)
vc4_hdmi->variant->csc_setup(vc4_hdmi, conn_state, mode);
@@ -1436,6 +1436,7 @@ static void vc4_hdmi_encoder_pre_crtc_enable(struct drm_encoder *encoder,
drm_dev_exit(idx);
out:
mutex_unlock(&vc4_hdmi->mutex);
}
@@ -1455,7 +1456,7 @@ static void vc4_hdmi_encoder_post_crtc_enable(struct drm_encoder *encoder,
mutex_lock(&vc4_hdmi->mutex);
if (!drm_dev_enter(drm, &idx))
return;
goto out;
spin_lock_irqsave(&vc4_hdmi->hw_lock, flags);
@@ -1516,6 +1517,8 @@ static void vc4_hdmi_encoder_post_crtc_enable(struct drm_encoder *encoder,
vc4_hdmi_enable_scrambling(encoder);
drm_dev_exit(idx);
out:
mutex_unlock(&vc4_hdmi->mutex);
}


@@ -71,11 +71,11 @@ void vc4_hvs_dump_state(struct vc4_hvs *hvs)
struct drm_printer p = drm_info_printer(&hvs->pdev->dev);
int idx, i;
drm_print_regset32(&p, &hvs->regset);
if (!drm_dev_enter(drm, &idx))
return;
drm_print_regset32(&p, &hvs->regset);
DRM_INFO("HVS ctx:\n");
for (i = 0; i < 64; i += 4) {
DRM_INFO("0x%08x (%s): 0x%08x 0x%08x 0x%08x 0x%08x\n",


@@ -19,6 +19,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_atomic_uapi.h>
#include <drm/drm_blend.h>
#include <drm/drm_drv.h>
#include <drm/drm_fb_dma_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_framebuffer.h>
@@ -1219,6 +1220,10 @@ u32 vc4_plane_write_dlist(struct drm_plane *plane, u32 __iomem *dlist)
{
struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
int i;
int idx;
if (!drm_dev_enter(plane->dev, &idx))
goto out;
vc4_state->hw_dlist = dlist;
@@ -1226,6 +1231,9 @@ u32 vc4_plane_write_dlist(struct drm_plane *plane, u32 __iomem *dlist)
for (i = 0; i < vc4_state->dlist_count; i++)
writel(vc4_state->dlist[i], &dlist[i]);
drm_dev_exit(idx);
out:
return vc4_state->dlist_count;
}
@@ -1245,6 +1253,10 @@ void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb)
struct vc4_plane_state *vc4_state = to_vc4_plane_state(plane->state);
struct drm_gem_dma_object *bo = drm_fb_dma_get_gem_obj(fb, 0);
uint32_t addr;
int idx;
if (!drm_dev_enter(plane->dev, &idx))
return;
/* We're skipping the address adjustment for negative origin,
* because this is only called on the primary plane.
@@ -1263,6 +1275,8 @@ void vc4_plane_async_set_fb(struct drm_plane *plane, struct drm_framebuffer *fb)
* also use our updated address.
*/
vc4_state->dlist[vc4_state->ptr0_offset] = addr;
drm_dev_exit(idx);
}
static void vc4_plane_atomic_async_update(struct drm_plane *plane,
@@ -1271,6 +1285,10 @@ static void vc4_plane_atomic_async_update(struct drm_plane *plane,
struct drm_plane_state *new_plane_state = drm_atomic_get_new_plane_state(state,
plane);
struct vc4_plane_state *vc4_state, *new_vc4_state;
int idx;
if (!drm_dev_enter(plane->dev, &idx))
return;
swap(plane->state->fb, new_plane_state->fb);
plane->state->crtc_x = new_plane_state->crtc_x;
@@ -1333,6 +1351,8 @@ static void vc4_plane_atomic_async_update(struct drm_plane *plane,
&vc4_state->hw_dlist[vc4_state->pos2_offset]);
writel(vc4_state->dlist[vc4_state->ptr0_offset],
&vc4_state->hw_dlist[vc4_state->ptr0_offset]);
drm_dev_exit(idx);
}
static int vc4_plane_atomic_async_check(struct drm_plane *plane,

View File

@@ -171,8 +171,6 @@ struct vc4_vec {
struct clk *clock;
const struct vc4_vec_tv_mode *tv_mode;
struct debugfs_regset32 regset;
};
@@ -194,7 +192,9 @@ enum vc4_vec_tv_mode_id {
struct vc4_vec_tv_mode {
const struct drm_display_mode *mode;
void (*mode_set)(struct vc4_vec *vec);
u32 config0;
u32 config1;
u32 custom_freq;
};
static const struct debugfs_reg32 vec_regs[] = {
@@ -224,95 +224,41 @@ static const struct debugfs_reg32 vec_regs[] = {
VC4_REG32(VEC_DAC_MISC),
};
static void vc4_vec_ntsc_mode_set(struct vc4_vec *vec)
{
struct drm_device *drm = vec->connector.dev;
int idx;
if (!drm_dev_enter(drm, &idx))
return;
VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_NTSC_STD | VEC_CONFIG0_PDEN);
VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS);
drm_dev_exit(idx);
}
static void vc4_vec_ntsc_j_mode_set(struct vc4_vec *vec)
{
struct drm_device *drm = vec->connector.dev;
int idx;
if (!drm_dev_enter(drm, &idx))
return;
VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_NTSC_STD);
VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS);
drm_dev_exit(idx);
}
static const struct drm_display_mode ntsc_mode = {
DRM_MODE("720x480", DRM_MODE_TYPE_DRIVER, 13500,
720, 720 + 14, 720 + 14 + 64, 720 + 14 + 64 + 60, 0,
480, 480 + 3, 480 + 3 + 3, 480 + 3 + 3 + 16, 0,
480, 480 + 7, 480 + 7 + 6, 525, 0,
DRM_MODE_FLAG_INTERLACE)
};
static void vc4_vec_pal_mode_set(struct vc4_vec *vec)
{
struct drm_device *drm = vec->connector.dev;
int idx;
if (!drm_dev_enter(drm, &idx))
return;
VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_PAL_BDGHI_STD);
VEC_WRITE(VEC_CONFIG1, VEC_CONFIG1_C_CVBS_CVBS);
drm_dev_exit(idx);
}
static void vc4_vec_pal_m_mode_set(struct vc4_vec *vec)
{
struct drm_device *drm = vec->connector.dev;
int idx;
if (!drm_dev_enter(drm, &idx))
return;
VEC_WRITE(VEC_CONFIG0, VEC_CONFIG0_PAL_BDGHI_STD);
VEC_WRITE(VEC_CONFIG1,
VEC_CONFIG1_C_CVBS_CVBS | VEC_CONFIG1_CUSTOM_FREQ);
VEC_WRITE(VEC_FREQ3_2, 0x223b);
VEC_WRITE(VEC_FREQ1_0, 0x61d1);
drm_dev_exit(idx);
}
static const struct drm_display_mode pal_mode = {
DRM_MODE("720x576", DRM_MODE_TYPE_DRIVER, 13500,
720, 720 + 20, 720 + 20 + 64, 720 + 20 + 64 + 60, 0,
576, 576 + 2, 576 + 2 + 3, 576 + 2 + 3 + 20, 0,
576, 576 + 4, 576 + 4 + 6, 625, 0,
DRM_MODE_FLAG_INTERLACE)
};
static const struct vc4_vec_tv_mode vc4_vec_tv_modes[] = {
[VC4_VEC_TV_MODE_NTSC] = {
.mode = &ntsc_mode,
.mode_set = vc4_vec_ntsc_mode_set,
.config0 = VEC_CONFIG0_NTSC_STD | VEC_CONFIG0_PDEN,
.config1 = VEC_CONFIG1_C_CVBS_CVBS,
},
[VC4_VEC_TV_MODE_NTSC_J] = {
.mode = &ntsc_mode,
.mode_set = vc4_vec_ntsc_j_mode_set,
.config0 = VEC_CONFIG0_NTSC_STD,
.config1 = VEC_CONFIG1_C_CVBS_CVBS,
},
[VC4_VEC_TV_MODE_PAL] = {
.mode = &pal_mode,
.mode_set = vc4_vec_pal_mode_set,
.config0 = VEC_CONFIG0_PAL_BDGHI_STD,
.config1 = VEC_CONFIG1_C_CVBS_CVBS,
},
[VC4_VEC_TV_MODE_PAL_M] = {
.mode = &pal_mode,
.mode_set = vc4_vec_pal_m_mode_set,
.config0 = VEC_CONFIG0_PAL_BDGHI_STD,
.config1 = VEC_CONFIG1_C_CVBS_CVBS | VEC_CONFIG1_CUSTOM_FREQ,
.custom_freq = 0x223b61d1,
},
};
@@ -368,14 +314,14 @@ static int vc4_vec_connector_init(struct drm_device *dev, struct vc4_vec *vec)
drm_object_attach_property(&connector->base,
dev->mode_config.tv_mode_property,
VC4_VEC_TV_MODE_NTSC);
vec->tv_mode = &vc4_vec_tv_modes[VC4_VEC_TV_MODE_NTSC];
drm_connector_attach_encoder(connector, &vec->encoder.base);
return 0;
}
static void vc4_vec_encoder_disable(struct drm_encoder *encoder)
static void vc4_vec_encoder_disable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct drm_device *drm = encoder->dev;
struct vc4_vec *vec = encoder_to_vc4_vec(encoder);
@@ -406,10 +352,16 @@ err_dev_exit:
drm_dev_exit(idx);
}
static void vc4_vec_encoder_enable(struct drm_encoder *encoder)
static void vc4_vec_encoder_enable(struct drm_encoder *encoder,
struct drm_atomic_state *state)
{
struct drm_device *drm = encoder->dev;
struct vc4_vec *vec = encoder_to_vc4_vec(encoder);
struct drm_connector *connector = &vec->connector;
struct drm_connector_state *conn_state =
drm_atomic_get_new_connector_state(state, connector);
const struct vc4_vec_tv_mode *tv_mode =
&vc4_vec_tv_modes[conn_state->tv.mode];
int idx, ret;
if (!drm_dev_enter(drm, &idx))
@@ -468,7 +420,15 @@ static void vc4_vec_encoder_enable(struct drm_encoder *encoder)
/* Mask all interrupts. */
VEC_WRITE(VEC_MASK0, 0);
vec->tv_mode->mode_set(vec);
VEC_WRITE(VEC_CONFIG0, tv_mode->config0);
VEC_WRITE(VEC_CONFIG1, tv_mode->config1);
if (tv_mode->custom_freq) {
VEC_WRITE(VEC_FREQ3_2,
(tv_mode->custom_freq >> 16) & 0xffff);
VEC_WRITE(VEC_FREQ1_0,
tv_mode->custom_freq & 0xffff);
}
VEC_WRITE(VEC_DAC_MISC,
VEC_DAC_MISC_VID_ACT | VEC_DAC_MISC_DAC_RST_N);
@@ -483,23 +443,6 @@ err_dev_exit:
drm_dev_exit(idx);
}
static bool vc4_vec_encoder_mode_fixup(struct drm_encoder *encoder,
const struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode)
{
return true;
}
static void vc4_vec_encoder_atomic_mode_set(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
{
struct vc4_vec *vec = encoder_to_vc4_vec(encoder);
vec->tv_mode = &vc4_vec_tv_modes[conn_state->tv.mode];
}
static int vc4_vec_encoder_atomic_check(struct drm_encoder *encoder,
struct drm_crtc_state *crtc_state,
struct drm_connector_state *conn_state)
@@ -516,11 +459,9 @@ static int vc4_vec_encoder_atomic_check(struct drm_encoder *encoder,
}
static const struct drm_encoder_helper_funcs vc4_vec_encoder_helper_funcs = {
.disable = vc4_vec_encoder_disable,
.enable = vc4_vec_encoder_enable,
.mode_fixup = vc4_vec_encoder_mode_fixup,
.atomic_check = vc4_vec_encoder_atomic_check,
.atomic_mode_set = vc4_vec_encoder_atomic_mode_set,
.atomic_disable = vc4_vec_encoder_disable,
.atomic_enable = vc4_vec_encoder_enable,
};
static int vc4_vec_late_register(struct drm_encoder *encoder)


@@ -2961,7 +2961,7 @@ int via_dma_cleanup(struct drm_device *dev)
drm_via_private_t *dev_priv =
(drm_via_private_t *) dev->dev_private;
if (dev_priv->ring.virtual_start) {
if (dev_priv->ring.virtual_start && dev_priv->mmio) {
via_cmdbuf_reset(dev_priv);
drm_legacy_ioremapfree(&dev_priv->ring.map, dev);


@@ -349,6 +349,8 @@ int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)
vgdev->ddev->mode_config.max_width = XRES_MAX;
vgdev->ddev->mode_config.max_height = YRES_MAX;
vgdev->ddev->mode_config.fb_modifiers_not_supported = true;
for (i = 0 ; i < vgdev->num_scanouts; ++i)
vgdev_output_init(vgdev, i);


@@ -168,7 +168,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
* array contains any fence from a foreign context.
*/
ret = 0;
if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
if (!dma_fence_match_context(in_fence, fence_ctx + ring_idx))
ret = dma_fence_wait(in_fence, true);
dma_fence_put(in_fence);


@@ -3,6 +3,7 @@ vkms-y := \
vkms_drv.o \
vkms_plane.o \
vkms_output.o \
vkms_formats.o \
vkms_crtc.o \
vkms_composer.o \
vkms_writeback.o


@@ -7,205 +7,187 @@
#include <drm/drm_fourcc.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_vblank.h>
#include <linux/minmax.h>
#include "vkms_drv.h"
static u32 get_pixel_from_buffer(int x, int y, const u8 *buffer,
const struct vkms_composer *composer)
static u16 pre_mul_blend_channel(u16 src, u16 dst, u16 alpha)
{
u32 pixel;
int src_offset = composer->offset + (y * composer->pitch)
+ (x * composer->cpp);
u32 new_color;
pixel = *(u32 *)&buffer[src_offset];
new_color = (src * 0xffff + dst * (0xffff - alpha));
return pixel;
return DIV_ROUND_CLOSEST(new_color, 0xffff);
}
/**
* compute_crc - Compute CRC value on output frame
* pre_mul_alpha_blend - alpha blending equation
* @src_frame_info: source framebuffer's metadata
* @stage_buffer: The line with the pixels from src_plane
* @output_buffer: A line buffer that receives all the blends output
*
* @vaddr: address to final framebuffer
* @composer: framebuffer's metadata
* Using the information from the `frame_info`, this blends only the
* necessary pixels from the `stage_buffer` to the `output_buffer`
* using the premultiplied blend formula.
*
* returns CRC value computed using crc32 on the visible portion of
* the final framebuffer at vaddr_out
* The current DRM assumption is that pixel color values have already been
* pre-multiplied with the alpha channel values. See
* drm_plane_create_blend_mode_property() for more. Also, this formula assumes a
* completely opaque background.
*/
static uint32_t compute_crc(const u8 *vaddr,
const struct vkms_composer *composer)
static void pre_mul_alpha_blend(struct vkms_frame_info *frame_info,
struct line_buffer *stage_buffer,
struct line_buffer *output_buffer)
{
int x, y;
u32 crc = 0, pixel = 0;
int x_src = composer->src.x1 >> 16;
int y_src = composer->src.y1 >> 16;
int h_src = drm_rect_height(&composer->src) >> 16;
int w_src = drm_rect_width(&composer->src) >> 16;
int x_dst = frame_info->dst.x1;
struct pixel_argb_u16 *out = output_buffer->pixels + x_dst;
struct pixel_argb_u16 *in = stage_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
for (y = y_src; y < y_src + h_src; ++y) {
for (x = x_src; x < x_src + w_src; ++x) {
pixel = get_pixel_from_buffer(x, y, vaddr, composer);
crc = crc32_le(crc, (void *)&pixel, sizeof(u32));
}
}
return crc;
}
static u8 blend_channel(u8 src, u8 dst, u8 alpha)
{
u32 pre_blend;
u8 new_color;
pre_blend = (src * 255 + dst * (255 - alpha));
/* Faster div by 255 */
new_color = ((pre_blend + ((pre_blend + 257) >> 8)) >> 8);
return new_color;
}
/**
* alpha_blend - alpha blending equation
* @argb_src: src pixel on premultiplied alpha mode
* @argb_dst: dst pixel completely opaque
*
* blend pixels using premultiplied blend formula. The current DRM assumption
* is that pixel color values have been already pre-multiplied with the alpha
* channel values. See more drm_plane_create_blend_mode_property(). Also, this
* formula assumes a completely opaque background.
*/
static void alpha_blend(const u8 *argb_src, u8 *argb_dst)
{
u8 alpha;
alpha = argb_src[3];
argb_dst[0] = blend_channel(argb_src[0], argb_dst[0], alpha);
argb_dst[1] = blend_channel(argb_src[1], argb_dst[1], alpha);
argb_dst[2] = blend_channel(argb_src[2], argb_dst[2], alpha);
}
/**
* x_blend - blending equation that ignores the pixel alpha
*
* overwrites RGB color value from src pixel to dst pixel.
*/
static void x_blend(const u8 *xrgb_src, u8 *xrgb_dst)
{
memcpy(xrgb_dst, xrgb_src, sizeof(u8) * 3);
}
/**
* blend - blend value at vaddr_src with value at vaddr_dst
* @vaddr_dst: destination address
* @vaddr_src: source address
* @dst_composer: destination framebuffer's metadata
* @src_composer: source framebuffer's metadata
* @pixel_blend: blending equation based on plane format
*
* Blend the vaddr_src value with the vaddr_dst value using a pixel blend
* equation according to the supported plane formats DRM_FORMAT_(A/XRGB8888)
* and clearing alpha channel to an completely opaque background. This function
* uses buffer's metadata to locate the new composite values at vaddr_dst.
*
* TODO: completely clear the primary plane (a = 0xff) before starting to blend
* pixel color values
*/
static void blend(void *vaddr_dst, void *vaddr_src,
struct vkms_composer *dst_composer,
struct vkms_composer *src_composer,
void (*pixel_blend)(const u8 *, u8 *))
{
int i, j, j_dst, i_dst;
int offset_src, offset_dst;
u8 *pixel_dst, *pixel_src;
int x_src = src_composer->src.x1 >> 16;
int y_src = src_composer->src.y1 >> 16;
int x_dst = src_composer->dst.x1;
int y_dst = src_composer->dst.y1;
int h_dst = drm_rect_height(&src_composer->dst);
int w_dst = drm_rect_width(&src_composer->dst);
int y_limit = y_src + h_dst;
int x_limit = x_src + w_dst;
for (i = y_src, i_dst = y_dst; i < y_limit; ++i) {
for (j = x_src, j_dst = x_dst; j < x_limit; ++j) {
offset_dst = dst_composer->offset
+ (i_dst * dst_composer->pitch)
+ (j_dst++ * dst_composer->cpp);
offset_src = src_composer->offset
+ (i * src_composer->pitch)
+ (j * src_composer->cpp);
pixel_src = (u8 *)(vaddr_src + offset_src);
pixel_dst = (u8 *)(vaddr_dst + offset_dst);
pixel_blend(pixel_src, pixel_dst);
/* clearing alpha channel (0xff)*/
pixel_dst[3] = 0xff;
}
i_dst++;
for (int x = 0; x < x_limit; x++) {
out[x].a = (u16)0xffff;
out[x].r = pre_mul_blend_channel(in[x].r, out[x].r, in[x].a);
out[x].g = pre_mul_blend_channel(in[x].g, out[x].g, in[x].a);
out[x].b = pre_mul_blend_channel(in[x].b, out[x].b, in[x].a);
}
}
static void compose_plane(struct vkms_composer *primary_composer,
struct vkms_composer *plane_composer,
void *vaddr_out)
static bool check_y_limit(struct vkms_frame_info *frame_info, int y)
{
struct drm_framebuffer *fb = &plane_composer->fb;
void *vaddr;
void (*pixel_blend)(const u8 *p_src, u8 *p_dst);
if (y >= frame_info->dst.y1 && y < frame_info->dst.y2)
return true;
if (WARN_ON(iosys_map_is_null(&plane_composer->map[0])))
return;
vaddr = plane_composer->map[0].vaddr;
if (fb->format->format == DRM_FORMAT_ARGB8888)
pixel_blend = &alpha_blend;
else
pixel_blend = &x_blend;
blend(vaddr_out, vaddr, primary_composer, plane_composer, pixel_blend);
return false;
}
static int compose_active_planes(void **vaddr_out,
struct vkms_composer *primary_composer,
struct vkms_crtc_state *crtc_state)
static void fill_background(const struct pixel_argb_u16 *background_color,
struct line_buffer *output_buffer)
{
struct drm_framebuffer *fb = &primary_composer->fb;
struct drm_gem_object *gem_obj = drm_gem_fb_get_obj(fb, 0);
const void *vaddr;
int i;
for (size_t i = 0; i < output_buffer->n_pixels; i++)
output_buffer->pixels[i] = *background_color;
}
if (!*vaddr_out) {
*vaddr_out = kvzalloc(gem_obj->size, GFP_KERNEL);
if (!*vaddr_out) {
DRM_ERROR("Cannot allocate memory for output frame.");
return -ENOMEM;
/**
 * blend - blend the pixels from all planes and compute the crc
 * @wb_frame_info: The writeback frame buffer metadata
 * @crtc_state: The crtc state
 * @crc32: The crc output of the final frame
 * @output_buffer: A buffer of a row that will receive the result of the blend(s)
 * @stage_buffer: The line with the pixels from the plane being blended to the output
 *
 * This function blends the pixels (using `pre_mul_alpha_blend`)
 * from all planes, calculates the crc32 of the output from the former step,
 * and, if necessary, converts and stores the output to the writeback buffer.
*/
static void blend(struct vkms_writeback_job *wb,
struct vkms_crtc_state *crtc_state,
u32 *crc32, struct line_buffer *stage_buffer,
struct line_buffer *output_buffer, size_t row_size)
{
struct vkms_plane_state **plane = crtc_state->active_planes;
u32 n_active_planes = crtc_state->num_active_planes;
const struct pixel_argb_u16 background_color = { .a = 0xffff };
size_t crtc_y_limit = crtc_state->base.crtc->mode.vdisplay;
for (size_t y = 0; y < crtc_y_limit; y++) {
fill_background(&background_color, output_buffer);
/* The active planes are composed associatively in z-order. */
for (size_t i = 0; i < n_active_planes; i++) {
if (!check_y_limit(plane[i]->frame_info, y))
continue;
plane[i]->plane_read(stage_buffer, plane[i]->frame_info, y);
pre_mul_alpha_blend(plane[i]->frame_info, stage_buffer,
output_buffer);
}
*crc32 = crc32_le(*crc32, (void *)output_buffer->pixels, row_size);
if (wb)
wb->wb_write(&wb->wb_frame_info, output_buffer, y);
}
}
if (WARN_ON(iosys_map_is_null(&primary_composer->map[0])))
return -EINVAL;
static int check_format_funcs(struct vkms_crtc_state *crtc_state,
struct vkms_writeback_job *active_wb)
{
struct vkms_plane_state **planes = crtc_state->active_planes;
u32 n_active_planes = crtc_state->num_active_planes;
vaddr = primary_composer->map[0].vaddr;
for (size_t i = 0; i < n_active_planes; i++)
if (!planes[i]->plane_read)
return -1;
memcpy(*vaddr_out, vaddr, gem_obj->size);
/* If there are other planes besides primary, we consider the active
* planes should be in z-order and compose them associatively:
* ((primary <- overlay) <- cursor)
*/
for (i = 1; i < crtc_state->num_active_planes; i++)
compose_plane(primary_composer,
crtc_state->active_planes[i]->composer,
*vaddr_out);
if (active_wb && !active_wb->wb_write)
return -1;
return 0;
}
static int check_iosys_map(struct vkms_crtc_state *crtc_state)
{
struct vkms_plane_state **plane_state = crtc_state->active_planes;
u32 n_active_planes = crtc_state->num_active_planes;
for (size_t i = 0; i < n_active_planes; i++)
if (iosys_map_is_null(&plane_state[i]->frame_info->map[0]))
return -1;
return 0;
}
static int compose_active_planes(struct vkms_writeback_job *active_wb,
struct vkms_crtc_state *crtc_state,
u32 *crc32)
{
size_t line_width, pixel_size = sizeof(struct pixel_argb_u16);
struct line_buffer output_buffer, stage_buffer;
int ret = 0;
/*
* This check exists so we can call `crc32_le` for the entire line
* instead of doing it for each channel of each pixel in case
* `struct pixel_argb_u16` had any gap added by the compiler
* between the struct fields.
*/
static_assert(sizeof(struct pixel_argb_u16) == 8);
if (WARN_ON(check_iosys_map(crtc_state)))
return -EINVAL;
if (WARN_ON(check_format_funcs(crtc_state, active_wb)))
return -EINVAL;
line_width = crtc_state->base.crtc->mode.hdisplay;
stage_buffer.n_pixels = line_width;
output_buffer.n_pixels = line_width;
stage_buffer.pixels = kvmalloc(line_width * pixel_size, GFP_KERNEL);
if (!stage_buffer.pixels) {
DRM_ERROR("Cannot allocate memory for the stage line buffer");
return -ENOMEM;
}
output_buffer.pixels = kvmalloc(line_width * pixel_size, GFP_KERNEL);
if (!output_buffer.pixels) {
DRM_ERROR("Cannot allocate memory for the output line buffer");
ret = -ENOMEM;
goto free_stage_buffer;
}
blend(active_wb, crtc_state, crc32, &stage_buffer,
&output_buffer, line_width * pixel_size);
kvfree(output_buffer.pixels);
free_stage_buffer:
kvfree(stage_buffer.pixels);
return ret;
}
/**
* vkms_composer_worker - ordered work_struct to compute CRC
*
@@ -221,13 +203,11 @@ void vkms_composer_worker(struct work_struct *work)
struct vkms_crtc_state,
composer_work);
struct drm_crtc *crtc = crtc_state->base.crtc;
struct vkms_writeback_job *active_wb = crtc_state->active_writeback;
struct vkms_output *out = drm_crtc_to_vkms_output(crtc);
struct vkms_composer *primary_composer = NULL;
struct vkms_plane_state *act_plane = NULL;
bool crc_pending, wb_pending;
void *vaddr_out = NULL;
u32 crc32 = 0;
u64 frame_start, frame_end;
u32 crc32 = 0;
int ret;
spin_lock_irq(&out->composer_lock);
@@ -247,35 +227,19 @@ void vkms_composer_worker(struct work_struct *work)
if (!crc_pending)
return;
if (crtc_state->num_active_planes >= 1) {
act_plane = crtc_state->active_planes[0];
if (act_plane->base.base.plane->type == DRM_PLANE_TYPE_PRIMARY)
primary_composer = act_plane->composer;
}
if (!primary_composer)
return;
if (wb_pending)
vaddr_out = crtc_state->active_writeback->data[0].vaddr;
ret = compose_active_planes(active_wb, crtc_state, &crc32);
else
ret = compose_active_planes(NULL, crtc_state, &crc32);
ret = compose_active_planes(&vaddr_out, primary_composer,
crtc_state);
if (ret) {
if (ret == -EINVAL && !wb_pending)
kvfree(vaddr_out);
if (ret)
return;
}
crc32 = compute_crc(vaddr_out, primary_composer);
if (wb_pending) {
drm_writeback_signal_completion(&out->wb_connector, 0);
spin_lock_irq(&out->composer_lock);
crtc_state->wb_pending = false;
spin_unlock_irq(&out->composer_lock);
} else {
kvfree(vaddr_out);
}
/*


@@ -23,28 +23,41 @@
#define NUM_OVERLAY_PLANES 8
struct vkms_writeback_job {
struct iosys_map map[DRM_FORMAT_MAX_PLANES];
struct iosys_map data[DRM_FORMAT_MAX_PLANES];
};
struct vkms_composer {
struct drm_framebuffer fb;
struct vkms_frame_info {
struct drm_framebuffer *fb;
struct drm_rect src, dst;
struct iosys_map map[4];
struct iosys_map map[DRM_FORMAT_MAX_PLANES];
unsigned int offset;
unsigned int pitch;
unsigned int cpp;
};
struct pixel_argb_u16 {
u16 a, r, g, b;
};
struct line_buffer {
size_t n_pixels;
struct pixel_argb_u16 *pixels;
};
struct vkms_writeback_job {
struct iosys_map data[DRM_FORMAT_MAX_PLANES];
struct vkms_frame_info wb_frame_info;
void (*wb_write)(struct vkms_frame_info *frame_info,
const struct line_buffer *buffer, int y);
};
/**
* vkms_plane_state - Driver specific plane state
* @base: base plane state
* @composer: data required for composing computation
* @frame_info: data required for composing computation
*/
struct vkms_plane_state {
struct drm_shadow_plane_state base;
struct vkms_composer *composer;
struct vkms_frame_info *frame_info;
void (*plane_read)(struct line_buffer *buffer,
const struct vkms_frame_info *frame_info, int y);
};
struct vkms_plane {


@ -0,0 +1,301 @@
// SPDX-License-Identifier: GPL-2.0+
#include <drm/drm_rect.h>
#include <linux/minmax.h>
#include "vkms_formats.h"
/* The following macros help doing fixed point arithmetic. */
/*
* With Fixed-Point scale 15 we have 17 and 15 bits of integer and fractional
* parts respectively.
* | 0000 0000 0000 0000 0.000 0000 0000 0000 |
* 31 0
*/
#define SHIFT 15
#define INT_TO_FIXED(a) ((a) << SHIFT)
#define FIXED_MUL(a, b) ((s32)(((s64)(a) * (b)) >> SHIFT))
#define FIXED_DIV(a, b) ((s32)(((s64)(a) << SHIFT) / (b)))
/* This macro converts a fixed-point number to int, rounding half up */
#define FIXED_TO_INT_ROUND(a) (((a) + (1 << (SHIFT - 1))) >> SHIFT)
#define INT_TO_FIXED_DIV(a, b) (FIXED_DIV(INT_TO_FIXED(a), INT_TO_FIXED(b)))
static size_t pixel_offset(const struct vkms_frame_info *frame_info, int x, int y)
{
return frame_info->offset + (y * frame_info->pitch)
+ (x * frame_info->cpp);
}
/*
 * packed_pixels_addr - Get the pointer to the pixel at a given pair of coordinates
 *
 * @frame_info: Buffer metadata
 * @x: The x (width) coordinate of the 2D buffer
 * @y: The y (height) coordinate of the 2D buffer
 *
 * Takes the information stored in the frame_info and a pair of coordinates,
 * and returns the address of the first color channel.
 * This function assumes the channels are packed together, i.e. one color
 * channel comes immediately after another in memory, and therefore it does
 * not work for YUV formats with chroma subsampling (e.g. YUV420 and NV21).
 */
static void *packed_pixels_addr(const struct vkms_frame_info *frame_info,
int x, int y)
{
size_t offset = pixel_offset(frame_info, x, y);
return (u8 *)frame_info->map[0].vaddr + offset;
}
static void *get_packed_src_addr(const struct vkms_frame_info *frame_info, int y)
{
int x_src = frame_info->src.x1 >> 16;
int y_src = y - frame_info->dst.y1 + (frame_info->src.y1 >> 16);
return packed_pixels_addr(frame_info, x_src, y_src);
}
static void ARGB8888_to_argb_u16(struct line_buffer *stage_buffer,
const struct vkms_frame_info *frame_info, int y)
{
struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
u8 *src_pixels = get_packed_src_addr(frame_info, y);
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
/*
 * 257 is the "conversion ratio", obtained from the division
 * (2^16 - 1) / (2^8 - 1). It maps the full 8-bit range onto the full
 * 16-bit range, picking the best color value the wider pixel format
 * can represent. A similar idea applies to the other RGB conversions.
 */
out_pixels[x].a = (u16)src_pixels[3] * 257;
out_pixels[x].r = (u16)src_pixels[2] * 257;
out_pixels[x].g = (u16)src_pixels[1] * 257;
out_pixels[x].b = (u16)src_pixels[0] * 257;
}
}
static void XRGB8888_to_argb_u16(struct line_buffer *stage_buffer,
const struct vkms_frame_info *frame_info, int y)
{
struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
u8 *src_pixels = get_packed_src_addr(frame_info, y);
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
out_pixels[x].a = (u16)0xffff;
out_pixels[x].r = (u16)src_pixels[2] * 257;
out_pixels[x].g = (u16)src_pixels[1] * 257;
out_pixels[x].b = (u16)src_pixels[0] * 257;
}
}
static void ARGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
const struct vkms_frame_info *frame_info,
int y)
{
struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
u16 *src_pixels = get_packed_src_addr(frame_info, y);
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
out_pixels[x].a = le16_to_cpu(src_pixels[3]);
out_pixels[x].r = le16_to_cpu(src_pixels[2]);
out_pixels[x].g = le16_to_cpu(src_pixels[1]);
out_pixels[x].b = le16_to_cpu(src_pixels[0]);
}
}
static void XRGB16161616_to_argb_u16(struct line_buffer *stage_buffer,
const struct vkms_frame_info *frame_info,
int y)
{
struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
u16 *src_pixels = get_packed_src_addr(frame_info, y);
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, src_pixels += 4) {
out_pixels[x].a = (u16)0xffff;
out_pixels[x].r = le16_to_cpu(src_pixels[2]);
out_pixels[x].g = le16_to_cpu(src_pixels[1]);
out_pixels[x].b = le16_to_cpu(src_pixels[0]);
}
}
static void RGB565_to_argb_u16(struct line_buffer *stage_buffer,
const struct vkms_frame_info *frame_info, int y)
{
struct pixel_argb_u16 *out_pixels = stage_buffer->pixels;
u16 *src_pixels = get_packed_src_addr(frame_info, y);
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
stage_buffer->n_pixels);
s32 fp_rb_ratio = INT_TO_FIXED_DIV(65535, 31);
s32 fp_g_ratio = INT_TO_FIXED_DIV(65535, 63);
for (size_t x = 0; x < x_limit; x++, src_pixels++) {
u16 rgb_565 = le16_to_cpu(*src_pixels);
s32 fp_r = INT_TO_FIXED((rgb_565 >> 11) & 0x1f);
s32 fp_g = INT_TO_FIXED((rgb_565 >> 5) & 0x3f);
s32 fp_b = INT_TO_FIXED(rgb_565 & 0x1f);
out_pixels[x].a = (u16)0xffff;
out_pixels[x].r = FIXED_TO_INT_ROUND(FIXED_MUL(fp_r, fp_rb_ratio));
out_pixels[x].g = FIXED_TO_INT_ROUND(FIXED_MUL(fp_g, fp_g_ratio));
out_pixels[x].b = FIXED_TO_INT_ROUND(FIXED_MUL(fp_b, fp_rb_ratio));
}
}
/*
 * The following functions take a line of argb_u16 pixels from the
 * src_buffer, convert them to a specific format, and store them in the
 * destination.
 *
 * They are used in `compose_active_planes` to convert and store a line
 * from the src_buffer to the writeback buffer.
 */
static void argb_u16_to_ARGB8888(struct vkms_frame_info *frame_info,
const struct line_buffer *src_buffer, int y)
{
int x_dst = frame_info->dst.x1;
u8 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y);
struct pixel_argb_u16 *in_pixels = src_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
src_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) {
/*
 * The sequence below matters because the format's byte order is
 * little-endian. For ARGB8888 the memory is organized this way:
 *
 * | Addr     | = blue channel
 * | Addr + 1 | = green channel
 * | Addr + 2 | = red channel
 * | Addr + 3 | = alpha channel
 */
dst_pixels[3] = DIV_ROUND_CLOSEST(in_pixels[x].a, 257);
dst_pixels[2] = DIV_ROUND_CLOSEST(in_pixels[x].r, 257);
dst_pixels[1] = DIV_ROUND_CLOSEST(in_pixels[x].g, 257);
dst_pixels[0] = DIV_ROUND_CLOSEST(in_pixels[x].b, 257);
}
}
static void argb_u16_to_XRGB8888(struct vkms_frame_info *frame_info,
const struct line_buffer *src_buffer, int y)
{
int x_dst = frame_info->dst.x1;
u8 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y);
struct pixel_argb_u16 *in_pixels = src_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
src_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) {
dst_pixels[3] = 0xff;
dst_pixels[2] = DIV_ROUND_CLOSEST(in_pixels[x].r, 257);
dst_pixels[1] = DIV_ROUND_CLOSEST(in_pixels[x].g, 257);
dst_pixels[0] = DIV_ROUND_CLOSEST(in_pixels[x].b, 257);
}
}
static void argb_u16_to_ARGB16161616(struct vkms_frame_info *frame_info,
const struct line_buffer *src_buffer, int y)
{
int x_dst = frame_info->dst.x1;
u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y);
struct pixel_argb_u16 *in_pixels = src_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
src_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) {
dst_pixels[3] = cpu_to_le16(in_pixels[x].a);
dst_pixels[2] = cpu_to_le16(in_pixels[x].r);
dst_pixels[1] = cpu_to_le16(in_pixels[x].g);
dst_pixels[0] = cpu_to_le16(in_pixels[x].b);
}
}
static void argb_u16_to_XRGB16161616(struct vkms_frame_info *frame_info,
const struct line_buffer *src_buffer, int y)
{
int x_dst = frame_info->dst.x1;
u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y);
struct pixel_argb_u16 *in_pixels = src_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
src_buffer->n_pixels);
for (size_t x = 0; x < x_limit; x++, dst_pixels += 4) {
dst_pixels[3] = 0xffff;
dst_pixels[2] = cpu_to_le16(in_pixels[x].r);
dst_pixels[1] = cpu_to_le16(in_pixels[x].g);
dst_pixels[0] = cpu_to_le16(in_pixels[x].b);
}
}
static void argb_u16_to_RGB565(struct vkms_frame_info *frame_info,
const struct line_buffer *src_buffer, int y)
{
int x_dst = frame_info->dst.x1;
u16 *dst_pixels = packed_pixels_addr(frame_info, x_dst, y);
struct pixel_argb_u16 *in_pixels = src_buffer->pixels;
int x_limit = min_t(size_t, drm_rect_width(&frame_info->dst),
src_buffer->n_pixels);
s32 fp_rb_ratio = INT_TO_FIXED_DIV(65535, 31);
s32 fp_g_ratio = INT_TO_FIXED_DIV(65535, 63);
for (size_t x = 0; x < x_limit; x++, dst_pixels++) {
s32 fp_r = INT_TO_FIXED(in_pixels[x].r);
s32 fp_g = INT_TO_FIXED(in_pixels[x].g);
s32 fp_b = INT_TO_FIXED(in_pixels[x].b);
u16 r = FIXED_TO_INT_ROUND(FIXED_DIV(fp_r, fp_rb_ratio));
u16 g = FIXED_TO_INT_ROUND(FIXED_DIV(fp_g, fp_g_ratio));
u16 b = FIXED_TO_INT_ROUND(FIXED_DIV(fp_b, fp_rb_ratio));
*dst_pixels = cpu_to_le16(r << 11 | g << 5 | b);
}
}
void *get_frame_to_line_function(u32 format)
{
switch (format) {
case DRM_FORMAT_ARGB8888:
return &ARGB8888_to_argb_u16;
case DRM_FORMAT_XRGB8888:
return &XRGB8888_to_argb_u16;
case DRM_FORMAT_ARGB16161616:
return &ARGB16161616_to_argb_u16;
case DRM_FORMAT_XRGB16161616:
return &XRGB16161616_to_argb_u16;
case DRM_FORMAT_RGB565:
return &RGB565_to_argb_u16;
default:
return NULL;
}
}
void *get_line_to_frame_function(u32 format)
{
switch (format) {
case DRM_FORMAT_ARGB8888:
return &argb_u16_to_ARGB8888;
case DRM_FORMAT_XRGB8888:
return &argb_u16_to_XRGB8888;
case DRM_FORMAT_ARGB16161616:
return &argb_u16_to_ARGB16161616;
case DRM_FORMAT_XRGB16161616:
return &argb_u16_to_XRGB16161616;
case DRM_FORMAT_RGB565:
return &argb_u16_to_RGB565;
default:
return NULL;
}
}


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0+ */
#ifndef _VKMS_FORMATS_H_
#define _VKMS_FORMATS_H_
#include "vkms_drv.h"
void *get_frame_to_line_function(u32 format);
void *get_line_to_frame_function(u32 format);
#endif /* _VKMS_FORMATS_H_ */


@ -9,34 +9,40 @@
#include <drm/drm_gem_framebuffer_helper.h>
#include "vkms_drv.h"
#include "vkms_formats.h"
static const u32 vkms_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XRGB16161616,
DRM_FORMAT_RGB565
};
static const u32 vkms_plane_formats[] = {
DRM_FORMAT_ARGB8888,
DRM_FORMAT_XRGB8888
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XRGB16161616,
DRM_FORMAT_ARGB16161616,
DRM_FORMAT_RGB565
};
static struct drm_plane_state *
vkms_plane_duplicate_state(struct drm_plane *plane)
{
struct vkms_plane_state *vkms_state;
struct vkms_composer *composer;
struct vkms_frame_info *frame_info;
vkms_state = kzalloc(sizeof(*vkms_state), GFP_KERNEL);
if (!vkms_state)
return NULL;
composer = kzalloc(sizeof(*composer), GFP_KERNEL);
if (!composer) {
DRM_DEBUG_KMS("Couldn't allocate composer\n");
frame_info = kzalloc(sizeof(*frame_info), GFP_KERNEL);
if (!frame_info) {
DRM_DEBUG_KMS("Couldn't allocate frame_info\n");
kfree(vkms_state);
return NULL;
}
vkms_state->composer = composer;
vkms_state->frame_info = frame_info;
__drm_gem_duplicate_shadow_plane_state(plane, &vkms_state->base);
@ -49,16 +55,16 @@ static void vkms_plane_destroy_state(struct drm_plane *plane,
struct vkms_plane_state *vkms_state = to_vkms_plane_state(old_state);
struct drm_crtc *crtc = vkms_state->base.base.crtc;
if (crtc) {
if (crtc && vkms_state->frame_info->fb) {
/* dropping the reference we acquired in
* vkms_primary_plane_update()
*/
if (drm_framebuffer_read_refcount(&vkms_state->composer->fb))
drm_framebuffer_put(&vkms_state->composer->fb);
if (drm_framebuffer_read_refcount(vkms_state->frame_info->fb))
drm_framebuffer_put(vkms_state->frame_info->fb);
}
kfree(vkms_state->composer);
vkms_state->composer = NULL;
kfree(vkms_state->frame_info);
vkms_state->frame_info = NULL;
__drm_gem_destroy_shadow_plane_state(&vkms_state->base);
kfree(vkms_state);
@ -98,7 +104,8 @@ static void vkms_plane_atomic_update(struct drm_plane *plane,
struct vkms_plane_state *vkms_plane_state;
struct drm_shadow_plane_state *shadow_plane_state;
struct drm_framebuffer *fb = new_state->fb;
struct vkms_composer *composer;
struct vkms_frame_info *frame_info;
u32 fmt = fb->format->format;
if (!new_state->crtc || !fb)
return;
@ -106,15 +113,16 @@ static void vkms_plane_atomic_update(struct drm_plane *plane,
vkms_plane_state = to_vkms_plane_state(new_state);
shadow_plane_state = &vkms_plane_state->base;
composer = vkms_plane_state->composer;
memcpy(&composer->src, &new_state->src, sizeof(struct drm_rect));
memcpy(&composer->dst, &new_state->dst, sizeof(struct drm_rect));
memcpy(&composer->fb, fb, sizeof(struct drm_framebuffer));
memcpy(&composer->map, &shadow_plane_state->data, sizeof(composer->map));
drm_framebuffer_get(&composer->fb);
composer->offset = fb->offsets[0];
composer->pitch = fb->pitches[0];
composer->cpp = fb->format->cpp[0];
frame_info = vkms_plane_state->frame_info;
memcpy(&frame_info->src, &new_state->src, sizeof(struct drm_rect));
memcpy(&frame_info->dst, &new_state->dst, sizeof(struct drm_rect));
frame_info->fb = fb;
memcpy(&frame_info->map, &shadow_plane_state->data, sizeof(frame_info->map));
drm_framebuffer_get(frame_info->fb);
frame_info->offset = fb->offsets[0];
frame_info->pitch = fb->pitches[0];
frame_info->cpp = fb->format->cpp[0];
vkms_plane_state->plane_read = get_frame_to_line_function(fmt);
}
static int vkms_plane_atomic_check(struct drm_plane *plane,


@ -12,9 +12,13 @@
#include <drm/drm_gem_shmem_helper.h>
#include "vkms_drv.h"
#include "vkms_formats.h"
static const u32 vkms_wb_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XRGB16161616,
DRM_FORMAT_ARGB16161616,
DRM_FORMAT_RGB565
};
static const struct drm_connector_funcs vkms_wb_connector_funcs = {
@ -31,6 +35,7 @@ static int vkms_wb_encoder_atomic_check(struct drm_encoder *encoder,
{
struct drm_framebuffer *fb;
const struct drm_display_mode *mode = &crtc_state->mode;
int ret;
if (!conn_state->writeback_job || !conn_state->writeback_job->fb)
return 0;
@ -42,11 +47,9 @@ static int vkms_wb_encoder_atomic_check(struct drm_encoder *encoder,
return -EINVAL;
}
if (fb->format->format != vkms_wb_formats[0]) {
DRM_DEBUG_KMS("Invalid pixel format %p4cc\n",
&fb->format->format);
return -EINVAL;
}
ret = drm_atomic_helper_check_wb_encoder_state(encoder, conn_state);
if (ret < 0)
return ret;
return 0;
}
@ -76,12 +79,15 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector,
if (!vkmsjob)
return -ENOMEM;
ret = drm_gem_fb_vmap(job->fb, vkmsjob->map, vkmsjob->data);
ret = drm_gem_fb_vmap(job->fb, vkmsjob->wb_frame_info.map, vkmsjob->data);
if (ret) {
DRM_ERROR("vmap failed: %d\n", ret);
goto err_kfree;
}
vkmsjob->wb_frame_info.fb = job->fb;
drm_framebuffer_get(vkmsjob->wb_frame_info.fb);
job->priv = vkmsjob;
return 0;
@ -100,7 +106,9 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector,
if (!job->fb)
return;
drm_gem_fb_vunmap(job->fb, vkmsjob->map);
drm_gem_fb_vunmap(job->fb, vkmsjob->wb_frame_info.map);
drm_framebuffer_put(vkmsjob->wb_frame_info.fb);
vkmsdev = drm_device_to_vkms_device(job->fb->dev);
vkms_set_composer(&vkmsdev->output, false);
@ -117,17 +125,32 @@ static void vkms_wb_atomic_commit(struct drm_connector *conn,
struct drm_writeback_connector *wb_conn = &output->wb_connector;
struct drm_connector_state *conn_state = wb_conn->base.state;
struct vkms_crtc_state *crtc_state = output->composer_state;
struct drm_framebuffer *fb = connector_state->writeback_job->fb;
u16 crtc_height = crtc_state->base.crtc->mode.vdisplay;
u16 crtc_width = crtc_state->base.crtc->mode.hdisplay;
struct vkms_writeback_job *active_wb;
struct vkms_frame_info *wb_frame_info;
u32 wb_format = fb->format->format;
if (!conn_state)
return;
vkms_set_composer(&vkmsdev->output, true);
active_wb = conn_state->writeback_job->priv;
wb_frame_info = &active_wb->wb_frame_info;
spin_lock_irq(&output->composer_lock);
crtc_state->active_writeback = conn_state->writeback_job->priv;
crtc_state->active_writeback = active_wb;
wb_frame_info->offset = fb->offsets[0];
wb_frame_info->pitch = fb->pitches[0];
wb_frame_info->cpp = fb->format->cpp[0];
crtc_state->wb_pending = true;
spin_unlock_irq(&output->composer_lock);
drm_writeback_queue_job(wb_conn, connector_state);
active_wb->wb_write = get_line_to_frame_function(wb_format);
drm_rect_init(&wb_frame_info->src, 0, 0, crtc_width, crtc_height);
drm_rect_init(&wb_frame_info->dst, 0, 0, crtc_width, crtc_height);
}
static const struct drm_connector_helper_funcs vkms_wb_conn_helper_funcs = {


@ -21,6 +21,7 @@
* DEALINGS IN THE SOFTWARE.
*/
#include <drm/display/drm_dp.h>
#include <linux/bitops.h>
#include <linux/bug.h>
#include <linux/errno.h>
@ -381,12 +382,34 @@ static int hdmi_audio_infoframe_check_only(const struct hdmi_audio_infoframe *fr
*
* Returns 0 on success or a negative error code on failure.
*/
int hdmi_audio_infoframe_check(struct hdmi_audio_infoframe *frame)
int hdmi_audio_infoframe_check(const struct hdmi_audio_infoframe *frame)
{
return hdmi_audio_infoframe_check_only(frame);
}
EXPORT_SYMBOL(hdmi_audio_infoframe_check);
static void
hdmi_audio_infoframe_pack_payload(const struct hdmi_audio_infoframe *frame,
u8 *buffer)
{
u8 channels;
if (frame->channels >= 2)
channels = frame->channels - 1;
else
channels = 0;
buffer[0] = ((frame->coding_type & 0xf) << 4) | (channels & 0x7);
buffer[1] = ((frame->sample_frequency & 0x7) << 2) |
(frame->sample_size & 0x3);
buffer[2] = frame->coding_type_ext & 0x1f;
buffer[3] = frame->channel_allocation;
buffer[4] = (frame->level_shift_value & 0xf) << 3;
if (frame->downmix_inhibit)
buffer[4] |= BIT(7);
}
/**
* hdmi_audio_infoframe_pack_only() - write HDMI audio infoframe to binary buffer
* @frame: HDMI audio infoframe
@ -404,7 +427,6 @@ EXPORT_SYMBOL(hdmi_audio_infoframe_check);
ssize_t hdmi_audio_infoframe_pack_only(const struct hdmi_audio_infoframe *frame,
void *buffer, size_t size)
{
unsigned char channels;
u8 *ptr = buffer;
size_t length;
int ret;
@ -420,28 +442,13 @@ ssize_t hdmi_audio_infoframe_pack_only(const struct hdmi_audio_infoframe *frame,
memset(buffer, 0, size);
if (frame->channels >= 2)
channels = frame->channels - 1;
else
channels = 0;
ptr[0] = frame->type;
ptr[1] = frame->version;
ptr[2] = frame->length;
ptr[3] = 0; /* checksum */
/* start infoframe payload */
ptr += HDMI_INFOFRAME_HEADER_SIZE;
ptr[0] = ((frame->coding_type & 0xf) << 4) | (channels & 0x7);
ptr[1] = ((frame->sample_frequency & 0x7) << 2) |
(frame->sample_size & 0x3);
ptr[2] = frame->coding_type_ext & 0x1f;
ptr[3] = frame->channel_allocation;
ptr[4] = (frame->level_shift_value & 0xf) << 3;
if (frame->downmix_inhibit)
ptr[4] |= BIT(7);
hdmi_audio_infoframe_pack_payload(frame,
ptr + HDMI_INFOFRAME_HEADER_SIZE);
hdmi_infoframe_set_checksum(buffer, length);
@ -479,6 +486,43 @@ ssize_t hdmi_audio_infoframe_pack(struct hdmi_audio_infoframe *frame,
}
EXPORT_SYMBOL(hdmi_audio_infoframe_pack);
/**
* hdmi_audio_infoframe_pack_for_dp - Pack a HDMI Audio infoframe for DisplayPort
*
* @frame: HDMI Audio infoframe
* @sdp: Secondary data packet for DisplayPort.
* @dp_version: DisplayPort version to be encoded in the header
*
* Packs a HDMI Audio Infoframe to be sent over DisplayPort. This function
* fills the secondary data packet to be used for DisplayPort.
*
* Return: Number of total written bytes or a negative errno on failure.
*/
ssize_t
hdmi_audio_infoframe_pack_for_dp(const struct hdmi_audio_infoframe *frame,
struct dp_sdp *sdp, u8 dp_version)
{
int ret;
ret = hdmi_audio_infoframe_check(frame);
if (ret)
return ret;
memset(sdp->db, 0, sizeof(sdp->db));
/* Secondary-data packet header */
sdp->sdp_header.HB0 = 0;
sdp->sdp_header.HB1 = frame->type;
sdp->sdp_header.HB2 = DP_SDP_AUDIO_INFOFRAME_HB2;
sdp->sdp_header.HB3 = (dp_version & 0x3f) << 2;
hdmi_audio_infoframe_pack_payload(frame, sdp->db);
/* Return size = frame length + four HB for sdp_header */
return frame->length + 4;
}
EXPORT_SYMBOL(hdmi_audio_infoframe_pack_for_dp);
/**
* hdmi_vendor_infoframe_init() - initialize an HDMI vendor infoframe
* @frame: HDMI vendor infoframe


@ -1536,6 +1536,8 @@ enum drm_dp_phy {
#define DP_SDP_VSC_EXT_CEA 0x21 /* DP 1.4 */
/* 0x80+ CEA-861 infoframe types */
#define DP_SDP_AUDIO_INFOFRAME_HB2 0x1b
/**
* struct dp_sdp_header - DP secondary data packet header
* @HB0: Secondary Data Packet ID


@ -69,6 +69,8 @@ bool drm_dp_128b132b_link_training_failed(const u8 link_status[DP_LINK_STATUS_SI
u8 drm_dp_link_rate_to_bw_code(int link_rate);
int drm_dp_bw_code_to_link_rate(u8 link_bw);
const char *drm_dp_phy_name(enum drm_dp_phy dp_phy);
/**
* struct drm_dp_vsc_sdp - drm DP VSC SDP
*
