Merge tag 'drm-intel-gt-next-2023-05-24' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

UAPI Changes:

- New getparam for querying PXP support and load status
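
A minimal userspace sketch of how the new getparam might be queried (not part
of this merge; the parameter name I915_PARAM_PXP_STATUS and the meaning of the
returned value are assumptions here, check include/uapi/drm/i915_drm.h for the
authoritative definition):

  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  int main(void)
  {
          int status = 0, fd = open("/dev/dri/renderD128", O_RDWR);
          struct drm_i915_getparam gp = {
                  .param = I915_PARAM_PXP_STATUS, /* assumed parameter name */
                  .value = &status,
          };

          if (fd < 0)
                  return 1;
          if (ioctl(fd, DRM_IOCTL_I915_GETPARAM, &gp) == 0)
                  printf("PXP status: %d\n", status); /* support/load state */
          else
                  perror("PXP getparam"); /* likely not supported on this device */
          close(fd);
          return 0;
  }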

Cross-subsystem Changes:

- GSC/MEI proxy driver

Driver Changes:

Fixes/improvements/new stuff:

- Avoid clearing pre-allocated framebuffers with the TTM backend (Nirmoy Das)
- Implement framebuffer mmap support (Nirmoy Das)
- Disable sampler indirect state in bindless heap (Lionel Landwerlin)
- Avoid out-of-bounds access when loading HuC (Lucas De Marchi)
- Actually return an error if GuC version range check fails (John Harrison)
- Get mutex and rpm ref just once in hwm_power_max_write (Ashutosh Dixit)
- Disable PL1 power limit when loading GuC firmware (Ashutosh Dixit)
- Block in hwmon while waiting for GuC reset to complete (Ashutosh Dixit)
- Provide sysfs for SLPC efficient freq (Vinay Belgaumkar)
- Add support for total context runtime for GuC back-end (Umesh Nerlige Ramappa)
- Enable fdinfo for GuC backends (Umesh Nerlige Ramappa); a usage sketch follows this list
- Don't capture Gen8 regs on Xe devices (John Harrison)
- Fix error capture for virtual engines (John Harrison)
- Track patch level versions on reduced version firmware files (John Harrison)
- Decode another GuC load failure case (John Harrison)
- GuC loading and firmware table handling fixes (John Harrison)
- Fix confused register capture list creation (John Harrison)
- Dump error capture to kernel log (John Harrison)
- Dump error capture to dmesg on CTB error (John Harrison)
- Disable rps_boost debugfs when SLPC is used (Vinay Belgaumkar)
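
With fdinfo enabled for the GuC backend, per-client engine busyness is exposed
through the standard DRM fdinfo keys (Documentation/gpu/drm-usage-stats.rst).
A small illustrative reader, assuming only the documented drm-engine-<class>
key format:

  #include <stdio.h>
  #include <string.h>

  /* Print the drm-engine-* busyness lines for an already-open DRM fd. */
  static void dump_engine_busyness(int drm_fd)
  {
          char path[64], line[256];
          FILE *f;

          snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", drm_fd);
          f = fopen(path, "r");
          if (!f)
                  return;
          while (fgets(line, sizeof(line), f))
                  if (!strncmp(line, "drm-engine-", 11))
                          fputs(line, stdout); /* e.g. "drm-engine-render: 1234 ns" */
          fclose(f);
  }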

Future platform enablement:

- Disable stolen memory backed FB for A0 [mtl] (Nirmoy Das)
- Various refactors for multi-tile enablement (Andi Shyti, Tejas Upadhyay)
- Extend Wa_22011802037 to MTL A-step (Madhumitha Tolakanahalli Pradeep)
- WA to clear RDOP clock gating [mtl] (Haridhar Kalvala)
- Set has_llc=0 [mtl] (Fei Yang)
- Define MOCS and PAT tables for MTL (Madhumitha Tolakanahalli Pradeep)
- Add PTE encode function [mtl] (Fei Yang)
- Fix MOCS selftest [mtl] (Fei Yang)
- Workaround coherency issue for Media [mtl] (Fei Yang)
- Add workaround 14018778641 [mtl] (Tejas Upadhyay)
- Implement Wa_14019141245 [mtl] (Radhakrishna Sripada)
- Fix the wa number for Wa_22016670082 [mtl] (Radhakrishna Sripada)
- Use correct huge page manager for MTL (Jonathan Cavitt)
- GSC/MEI support for Meteorlake (Alexander Usyskin, Daniele Ceraolo Spurio)
- Define GuC firmware version for MTL (John Harrison)
- Drop FLAT CCS check [mtl] (Pallavi Mishra)
- Add MTL for remapping CCS FBs [mtl] (Clint Taylor)
- Meteorlake PXP enablement (Alan Previn)
- Do not enable render power-gating on MTL (Andrzej Hajda)
- Add MTL performance tuning changes (Radhakrishna Sripada)
- Extend Wa_16014892111 to MTL A-step (Radhakrishna Sripada)
- PMU multi-tile support (Tvrtko Ursulin)
- End support for set caching ioctl [mtl] (Fei Yang)

Driver refactors:

- Use i915 instead of dev_priv inside the file_priv structure (Andi Shyti)
- Use proper parameter naming in for_each_engine() (Andi Shyti)
- Use gt_err for GT info (Tejas Upadhyay)
- Consolidate duplicated capture list code (John Harrison)
- Capture list naming clean up (John Harrison)
- Use kernel-doc -Werror when CONFIG_DRM_I915_WERROR=y (Jani Nikula)
- Preparation for using PAT index (Fei Yang)
- Use pat_index instead of cache_level (Fei Yang)
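
The cache_level to pat_index transition above boils down to a per-device
lookup: objects carry a PAT index, and legacy call sites translate enum
i915_cache_level through a platform table. A conceptual sketch only; the names
and table values below are illustrative, the real tables are the
LEGACY_CACHELEVEL/PVC_CACHELEVEL macros in i915_pci.c and the real helper is
i915_gem_get_pat_index() (both visible in the diff below):

  enum example_cache_level {
          EX_CACHE_NONE,
          EX_CACHE_LLC,
          EX_CACHE_L3_LLC,
          EX_CACHE_WT,
          EX_MAX_CACHE_LEVEL,
  };

  /* Illustrative translation table; entries are made-up PAT slots. */
  static const unsigned int example_cachelevel_to_pat[EX_MAX_CACHE_LEVEL] = {
          [EX_CACHE_NONE]   = 3,
          [EX_CACHE_LLC]    = 0,
          [EX_CACHE_L3_LLC] = 0,
          [EX_CACHE_WT]     = 1,
  };

  static unsigned int example_get_pat_index(enum example_cache_level level)
  {
          /* Same shape as i915_gem_get_pat_index(), minus the per-device table. */
          return example_cachelevel_to_pat[level];
  }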

Miscellaneous:

- Fix memory leaks in i915 selftests (Cong Liu)
- Record GT error for gt failure (Tejas Upadhyay)
- Migrate platform-dependent mock hugepage selftests to live (Jonathan Cavitt)
- Update the SLPC selftest (Vinay Belgaumkar)
- Throw out set() wrapper (Jani Nikula)
- Large driver kernel doc cleanup (Jani Nikula)
- Fix probe injection CI failures after recent change (John Harrison)
- Make unexpected firmware versions an error in debug builds (John Harrison)
- Silence UBSAN uninitialized bool variable warning (Ashutosh Dixit)
- Fix memory leaks in function live_nop_switch (Cong Liu)

Merges:

- Merge drm/drm-next into drm-intel-gt-next (Joonas Lahtinen)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# Conflicts:
#	drivers/gpu/drm/i915/gt/uc/intel_guc_capture.c
From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ZG5SxCWRSkZhTDtY@tursulin-desk
commit 85d712f033 (Dave Airlie, 2023-05-29 06:21:50 +10:00)
131 files changed, 4259 insertions(+), 1201 deletions(-)


@ -194,6 +194,7 @@ i915-y += \
# general-purpose microcontroller (GuC) support
i915-y += \
gt/uc/intel_gsc_fw.o \
gt/uc/intel_gsc_proxy.o \
gt/uc/intel_gsc_uc.o \
gt/uc/intel_gsc_uc_heci_cmd_submit.o\
gt/uc/intel_guc.o \
@ -338,6 +339,7 @@ i915-y += \
i915-$(CONFIG_DRM_I915_PXP) += \
pxp/intel_pxp_cmd.o \
pxp/intel_pxp_debugfs.o \
pxp/intel_pxp_gsccs.o \
pxp/intel_pxp_irq.o \
pxp/intel_pxp_pm.o \
pxp/intel_pxp_session.o
@ -373,7 +375,7 @@ obj-$(CONFIG_DRM_I915_GVT_KVMGT) += kvmgt.o
#
# Enable locally for CONFIG_DRM_I915_WERROR=y. See also scripts/Makefile.build
ifdef CONFIG_DRM_I915_WERROR
cmd_checkdoc = $(srctree)/scripts/kernel-doc -none $<
cmd_checkdoc = $(srctree)/scripts/kernel-doc -none -Werror $<
endif
# header test
@ -388,7 +390,7 @@ always-$(CONFIG_DRM_I915_WERROR) += \
quiet_cmd_hdrtest = HDRTEST $(patsubst %.hdrtest,%.h,$@)
cmd_hdrtest = $(CC) $(filter-out $(CFLAGS_GCOV), $(c_flags)) -S -o /dev/null -x c /dev/null -include $<; \
$(srctree)/scripts/kernel-doc -none $<; touch $@
$(srctree)/scripts/kernel-doc -none -Werror $<; touch $@
$(obj)/%.hdrtest: $(src)/%.h FORCE
$(call if_changed_dep,hdrtest)


@ -43,24 +43,24 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
static void dpt_insert_page(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
gen8_pte_t __iomem *base = dpt->iomem;
gen8_set_pte(base + offset / I915_GTT_PAGE_SIZE,
vm->pte_encode(addr, level, flags));
vm->pte_encode(addr, pat_index, flags));
}
static void dpt_insert_entries(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct i915_dpt *dpt = i915_vm_to_dpt(vm);
gen8_pte_t __iomem *base = dpt->iomem;
const gen8_pte_t pte_encode = vm->pte_encode(0, level, flags);
const gen8_pte_t pte_encode = vm->pte_encode(0, pat_index, flags);
struct sgt_iter sgt_iter;
dma_addr_t addr;
int i;
@ -83,7 +83,7 @@ static void dpt_clear_range(struct i915_address_space *vm,
static void dpt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
u32 pte_flags;
@ -98,7 +98,7 @@ static void dpt_bind_vma(struct i915_address_space *vm,
if (vma_res->bi.lmem)
pte_flags |= PTE_LM;
vm->insert_entries(vm, vma_res, cache_level, pte_flags);
vm->insert_entries(vm, vma_res, pat_index, pte_flags);
vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
@ -300,7 +300,7 @@ intel_dpt_create(struct intel_framebuffer *fb)
vm->vma_ops.bind_vma = dpt_bind_vma;
vm->vma_ops.unbind_vma = dpt_unbind_vma;
vm->pte_encode = gen8_ggtt_pte_encode;
vm->pte_encode = vm->gt->ggtt->vm.pte_encode;
dpt->obj = dpt_obj;
dpt->obj->is_dpt = true;


@ -1190,7 +1190,8 @@ bool intel_fb_needs_pot_stride_remap(const struct intel_framebuffer *fb)
{
struct drm_i915_private *i915 = to_i915(fb->base.dev);
return IS_ALDERLAKE_P(i915) && intel_fb_uses_dpt(&fb->base);
return (IS_ALDERLAKE_P(i915) || DISPLAY_VER(i915) >= 14) &&
intel_fb_uses_dpt(&fb->base);
}
static int intel_fb_pitch(const struct intel_framebuffer *fb, int color_plane, unsigned int rotation)
@ -1326,9 +1327,10 @@ plane_view_scanout_stride(const struct intel_framebuffer *fb, int color_plane,
unsigned int tile_width,
unsigned int src_stride_tiles, unsigned int dst_stride_tiles)
{
struct drm_i915_private *i915 = to_i915(fb->base.dev);
unsigned int stride_tiles;
if (IS_ALDERLAKE_P(to_i915(fb->base.dev)))
if (IS_ALDERLAKE_P(i915) || DISPLAY_VER(i915) >= 14)
stride_tiles = src_stride_tiles;
else
stride_tiles = dst_stride_tiles;
@ -1522,7 +1524,8 @@ static void intel_fb_view_init(struct drm_i915_private *i915, struct intel_fb_vi
memset(view, 0, sizeof(*view));
view->gtt.type = view_type;
if (view_type == I915_GTT_VIEW_REMAPPED && IS_ALDERLAKE_P(i915))
if (view_type == I915_GTT_VIEW_REMAPPED &&
(IS_ALDERLAKE_P(i915) || DISPLAY_VER(i915) >= 14))
view->gtt.remapped.plane_alignment = SZ_2M / PAGE_SIZE;
}


@ -40,8 +40,10 @@
#include <drm/drm_crtc.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_gem_framebuffer_helper.h>
#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_mman.h"
#include "i915_drv.h"
#include "intel_display_types.h"
@ -67,6 +69,11 @@ struct intel_fbdev {
struct mutex hpd_lock;
};
static struct intel_fbdev *to_intel_fbdev(struct drm_fb_helper *fb_helper)
{
return container_of(fb_helper, struct intel_fbdev, helper);
}
static struct intel_frontbuffer *to_frontbuffer(struct intel_fbdev *ifbdev)
{
return ifbdev->fb->frontbuffer;
@ -79,9 +86,7 @@ static void intel_fbdev_invalidate(struct intel_fbdev *ifbdev)
static int intel_fbdev_set_par(struct fb_info *info)
{
struct drm_fb_helper *fb_helper = info->par;
struct intel_fbdev *ifbdev =
container_of(fb_helper, struct intel_fbdev, helper);
struct intel_fbdev *ifbdev = to_intel_fbdev(info->par);
int ret;
ret = drm_fb_helper_set_par(info);
@ -93,9 +98,7 @@ static int intel_fbdev_set_par(struct fb_info *info)
static int intel_fbdev_blank(int blank, struct fb_info *info)
{
struct drm_fb_helper *fb_helper = info->par;
struct intel_fbdev *ifbdev =
container_of(fb_helper, struct intel_fbdev, helper);
struct intel_fbdev *ifbdev = to_intel_fbdev(info->par);
int ret;
ret = drm_fb_helper_blank(blank, info);
@ -108,9 +111,7 @@ static int intel_fbdev_blank(int blank, struct fb_info *info)
static int intel_fbdev_pan_display(struct fb_var_screeninfo *var,
struct fb_info *info)
{
struct drm_fb_helper *fb_helper = info->par;
struct intel_fbdev *ifbdev =
container_of(fb_helper, struct intel_fbdev, helper);
struct intel_fbdev *ifbdev = to_intel_fbdev(info->par);
int ret;
ret = drm_fb_helper_pan_display(var, info);
@ -120,6 +121,15 @@ static int intel_fbdev_pan_display(struct fb_var_screeninfo *var,
return ret;
}
static int intel_fbdev_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
struct intel_fbdev *fbdev = to_intel_fbdev(info->par);
struct drm_gem_object *bo = drm_gem_fb_get_obj(&fbdev->fb->base, 0);
struct drm_i915_gem_object *obj = to_intel_bo(bo);
return i915_gem_fb_mmap(obj, vma);
}
static const struct fb_ops intelfb_ops = {
.owner = THIS_MODULE,
DRM_FB_HELPER_DEFAULT_OPS,
@ -131,13 +141,13 @@ static const struct fb_ops intelfb_ops = {
.fb_imageblit = drm_fb_helper_cfb_imageblit,
.fb_pan_display = intel_fbdev_pan_display,
.fb_blank = intel_fbdev_blank,
.fb_mmap = intel_fbdev_mmap,
};
static int intelfb_alloc(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct intel_fbdev *ifbdev =
container_of(helper, struct intel_fbdev, helper);
struct intel_fbdev *ifbdev = to_intel_fbdev(helper);
struct drm_framebuffer *fb;
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
@ -163,7 +173,8 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
obj = ERR_PTR(-ENODEV);
if (HAS_LMEM(dev_priv)) {
obj = i915_gem_object_create_lmem(dev_priv, size,
I915_BO_ALLOC_CONTIGUOUS);
I915_BO_ALLOC_CONTIGUOUS |
I915_BO_ALLOC_USER);
} else {
/*
* If the FB is too big, just don't use it since fbdev is not very
@ -193,8 +204,7 @@ static int intelfb_alloc(struct drm_fb_helper *helper,
static int intelfb_create(struct drm_fb_helper *helper,
struct drm_fb_helper_surface_size *sizes)
{
struct intel_fbdev *ifbdev =
container_of(helper, struct intel_fbdev, helper);
struct intel_fbdev *ifbdev = to_intel_fbdev(helper);
struct intel_framebuffer *intel_fb = ifbdev->fb;
struct drm_device *dev = helper->dev;
struct drm_i915_private *dev_priv = to_i915(dev);


@ -110,7 +110,9 @@ initial_plane_vma(struct drm_i915_private *i915,
size * 2 > i915->dsm.usable_size)
return NULL;
obj = i915_gem_object_create_region_at(mem, phys_base, size, 0);
obj = i915_gem_object_create_region_at(mem, phys_base, size,
I915_BO_ALLOC_USER |
I915_BO_PREALLOC);
if (IS_ERR(obj))
return NULL;


@ -27,8 +27,15 @@ static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
if (IS_DGFX(i915))
return false;
return !(obj->cache_level == I915_CACHE_NONE ||
obj->cache_level == I915_CACHE_WT);
/*
* For objects created by userspace through GEM_CREATE with pat_index
* set by set_pat extension, i915_gem_object_has_cache_level() will
* always return true, because the coherency of such object is managed
* by userspace. Otherwise the call here would fall back to checking
* whether the object is un-cached or write-through.
*/
return !(i915_gem_object_has_cache_level(obj, I915_CACHE_NONE) ||
i915_gem_object_has_cache_level(obj, I915_CACHE_WT));
}
bool i915_gem_cpu_write_needs_clflush(struct drm_i915_gem_object *obj)
@ -267,7 +274,13 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
{
int ret;
if (obj->cache_level == cache_level)
/*
* For objects created by userspace through GEM_CREATE with pat_index
* set by set_pat extension, simply return 0 here without touching
* the cache setting, because such objects should have an immutable
* cache setting by design and are always managed by userspace.
*/
if (i915_gem_object_has_cache_level(obj, cache_level))
return 0;
ret = i915_gem_object_wait(obj,
@ -278,10 +291,8 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
return ret;
/* Always invalidate stale cachelines */
if (obj->cache_level != cache_level) {
i915_gem_object_set_cache_coherency(obj, cache_level);
obj->cache_dirty = true;
}
i915_gem_object_set_cache_coherency(obj, cache_level);
obj->cache_dirty = true;
/* The cache-level will be applied when each vma is rebound. */
return i915_gem_object_unbind(obj,
@ -306,20 +317,22 @@ int i915_gem_get_caching_ioctl(struct drm_device *dev, void *data,
goto out;
}
switch (obj->cache_level) {
case I915_CACHE_LLC:
case I915_CACHE_L3_LLC:
args->caching = I915_CACHING_CACHED;
break;
case I915_CACHE_WT:
args->caching = I915_CACHING_DISPLAY;
break;
default:
args->caching = I915_CACHING_NONE;
break;
/*
* This ioctl should be disabled for the objects with pat_index
* set by user space.
*/
if (obj->pat_set_by_user) {
err = -EOPNOTSUPP;
goto out;
}
if (i915_gem_object_has_cache_level(obj, I915_CACHE_LLC) ||
i915_gem_object_has_cache_level(obj, I915_CACHE_L3_LLC))
args->caching = I915_CACHING_CACHED;
else if (i915_gem_object_has_cache_level(obj, I915_CACHE_WT))
args->caching = I915_CACHING_DISPLAY;
else
args->caching = I915_CACHING_NONE;
out:
rcu_read_unlock();
return err;
@ -337,6 +350,9 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
if (IS_DGFX(i915))
return -ENODEV;
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70))
return -EOPNOTSUPP;
switch (args->caching) {
case I915_CACHING_NONE:
level = I915_CACHE_NONE;
@ -364,6 +380,15 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
if (!obj)
return -ENOENT;
/*
* This ioctl should be disabled for the objects with pat_index
* set by user space.
*/
if (obj->pat_set_by_user) {
ret = -EOPNOTSUPP;
goto out;
}
/*
* The caching mode of proxy object is handled by its generator, and
* not allowed to be changed by userspace.


@ -640,9 +640,15 @@ static inline int use_cpu_reloc(const struct reloc_cache *cache,
if (DBG_FORCE_RELOC == FORCE_GTT_RELOC)
return false;
/*
* For objects created by userspace through GEM_CREATE with pat_index
* set by set_pat extension, i915_gem_object_has_cache_level() always
* returns true, otherwise the call would fall back to checking whether
* the object is un-cached.
*/
return (cache->has_llc ||
obj->cache_dirty ||
obj->cache_level != I915_CACHE_NONE);
!i915_gem_object_has_cache_level(obj, I915_CACHE_NONE));
}
static int eb_reserve_vma(struct i915_execbuffer *eb,
@ -1324,7 +1330,10 @@ static void *reloc_iomap(struct i915_vma *batch,
if (drm_mm_node_allocated(&cache->node)) {
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, page),
offset, I915_CACHE_NONE, 0);
offset,
i915_gem_get_pat_index(ggtt->vm.i915,
I915_CACHE_NONE),
0);
} else {
offset += page << PAGE_SHIFT;
}
@ -1464,7 +1473,7 @@ eb_relocate_entry(struct i915_execbuffer *eb,
reloc_cache_unmap(&eb->reloc_cache);
mutex_lock(&vma->vm->mutex);
err = i915_vma_bind(target->vma,
target->vma->obj->cache_level,
target->vma->obj->pat_index,
PIN_GLOBAL, NULL, NULL);
mutex_unlock(&vma->vm->mutex);
reloc_cache_remap(&eb->reloc_cache, ev->vma->obj);


@ -383,7 +383,16 @@ retry:
}
/* Access to snoopable pages through the GTT is incoherent. */
if (obj->cache_level != I915_CACHE_NONE && !HAS_LLC(i915)) {
/*
* For objects created by userspace through GEM_CREATE with pat_index
* set by set_pat extension, coherency is managed by userspace, make
* sure we don't fail handling the vm fault by calling
* i915_gem_object_has_cache_level(), which always returns true for such
* objects. Otherwise this helper function would fall back to checking
* whether the object is un-cached.
*/
if (!(i915_gem_object_has_cache_level(obj, I915_CACHE_NONE) ||
HAS_LLC(i915))) {
ret = -EFAULT;
goto err_unpin;
}
@ -927,53 +936,15 @@ static struct file *mmap_singleton(struct drm_i915_private *i915)
return file;
}
/*
* This overcomes the limitation in drm_gem_mmap's assignment of a
* drm_gem_object as the vma->vm_private_data. Since we need to
* be able to resolve multiple mmap offsets which could be tied
* to a single gem object.
*/
int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
static int
i915_gem_object_mmap(struct drm_i915_gem_object *obj,
struct i915_mmap_offset *mmo,
struct vm_area_struct *vma)
{
struct drm_vma_offset_node *node;
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_i915_gem_object *obj = NULL;
struct i915_mmap_offset *mmo = NULL;
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct drm_device *dev = &i915->drm;
struct file *anon;
if (drm_dev_is_unplugged(dev))
return -ENODEV;
rcu_read_lock();
drm_vma_offset_lock_lookup(dev->vma_offset_manager);
node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
vma->vm_pgoff,
vma_pages(vma));
if (node && drm_vma_node_is_allowed(node, priv)) {
/*
* Skip 0-refcnted objects as it is in the process of being
* destroyed and will be invalid when the vma manager lock
* is released.
*/
if (!node->driver_private) {
mmo = container_of(node, struct i915_mmap_offset, vma_node);
obj = i915_gem_object_get_rcu(mmo->obj);
GEM_BUG_ON(obj && obj->ops->mmap_ops);
} else {
obj = i915_gem_object_get_rcu
(container_of(node, struct drm_i915_gem_object,
base.vma_node));
GEM_BUG_ON(obj && !obj->ops->mmap_ops);
}
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
rcu_read_unlock();
if (!obj)
return node ? -EACCES : -EINVAL;
if (i915_gem_object_is_readonly(obj)) {
if (vma->vm_flags & VM_WRITE) {
i915_gem_object_put(obj);
@ -1005,7 +976,7 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
if (obj->ops->mmap_ops) {
vma->vm_page_prot = pgprot_decrypted(vm_get_page_prot(vma->vm_flags));
vma->vm_ops = obj->ops->mmap_ops;
vma->vm_private_data = node->driver_private;
vma->vm_private_data = obj->base.vma_node.driver_private;
return 0;
}
@ -1043,6 +1014,91 @@ int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
return 0;
}
/*
* This overcomes the limitation in drm_gem_mmap's assignment of a
* drm_gem_object as the vma->vm_private_data. Since we need to
* be able to resolve multiple mmap offsets which could be tied
* to a single gem object.
*/
int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_vma_offset_node *node;
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct drm_i915_gem_object *obj = NULL;
struct i915_mmap_offset *mmo = NULL;
if (drm_dev_is_unplugged(dev))
return -ENODEV;
rcu_read_lock();
drm_vma_offset_lock_lookup(dev->vma_offset_manager);
node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
vma->vm_pgoff,
vma_pages(vma));
if (node && drm_vma_node_is_allowed(node, priv)) {
/*
* Skip 0-refcnted objects as it is in the process of being
* destroyed and will be invalid when the vma manager lock
* is released.
*/
if (!node->driver_private) {
mmo = container_of(node, struct i915_mmap_offset, vma_node);
obj = i915_gem_object_get_rcu(mmo->obj);
GEM_BUG_ON(obj && obj->ops->mmap_ops);
} else {
obj = i915_gem_object_get_rcu
(container_of(node, struct drm_i915_gem_object,
base.vma_node));
GEM_BUG_ON(obj && !obj->ops->mmap_ops);
}
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
rcu_read_unlock();
if (!obj)
return node ? -EACCES : -EINVAL;
return i915_gem_object_mmap(obj, mmo, vma);
}
int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct drm_device *dev = &i915->drm;
struct i915_mmap_offset *mmo = NULL;
enum i915_mmap_type mmap_type;
struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
if (drm_dev_is_unplugged(dev))
return -ENODEV;
/* handle ttm object */
if (obj->ops->mmap_ops) {
/*
* The TTM fault handler, ttm_bo_vm_fault_reserved(), uses the fake offset
* to calculate the page offset, so set that up.
*/
vma->vm_pgoff += drm_vma_node_start(&obj->base.vma_node);
} else {
/* handle stolen and smem objects */
mmap_type = i915_ggtt_has_aperture(ggtt) ? I915_MMAP_TYPE_GTT : I915_MMAP_TYPE_WC;
mmo = mmap_offset_attach(obj, mmap_type, NULL);
if (!mmo)
return -ENODEV;
}
/*
* When we install vm_ops for mmap we are too late for
* the vm_ops->open() which increases the ref_count of
* this obj and then it gets decreased by the vm_ops->close().
* To balance this, increase the obj ref_count here.
*/
obj = i915_gem_object_get(obj);
return i915_gem_object_mmap(obj, mmo, vma);
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
#include "selftests/i915_gem_mman.c"
#endif


@ -29,5 +29,5 @@ void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj);
void i915_gem_object_runtime_pm_release_mmap_offset(struct drm_i915_gem_object *obj);
void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj);
int i915_gem_fb_mmap(struct drm_i915_gem_object *obj, struct vm_area_struct *vma);
#endif


@ -45,6 +45,33 @@ static struct kmem_cache *slab_objects;
static const struct drm_gem_object_funcs i915_gem_object_funcs;
unsigned int i915_gem_get_pat_index(struct drm_i915_private *i915,
enum i915_cache_level level)
{
if (drm_WARN_ON(&i915->drm, level >= I915_MAX_CACHE_LEVEL))
return 0;
return INTEL_INFO(i915)->cachelevel_to_pat[level];
}
bool i915_gem_object_has_cache_level(const struct drm_i915_gem_object *obj,
enum i915_cache_level lvl)
{
/*
* In case the pat_index is set by user space, this kernel mode
* driver should leave the coherency to be managed by user space, so
* simply return true here.
*/
if (obj->pat_set_by_user)
return true;
/*
* Otherwise the pat_index should have been converted from cache_level
* so that the following comparison is valid.
*/
return obj->pat_index == i915_gem_get_pat_index(obj_to_i915(obj), lvl);
}
struct drm_i915_gem_object *i915_gem_object_alloc(void)
{
struct drm_i915_gem_object *obj;
@ -124,7 +151,7 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
obj->cache_level = cache_level;
obj->pat_index = i915_gem_get_pat_index(i915, cache_level);
if (cache_level != I915_CACHE_NONE)
obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
@ -139,6 +166,37 @@ void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
!IS_DGFX(i915);
}
/**
* i915_gem_object_set_pat_index - set PAT index to be used in PTE encode
* @obj: #drm_i915_gem_object
* @pat_index: PAT index
*
* This is a clone of i915_gem_object_set_cache_coherency taking pat index
* instead of cache_level as its second argument.
*/
void i915_gem_object_set_pat_index(struct drm_i915_gem_object *obj,
unsigned int pat_index)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
if (obj->pat_index == pat_index)
return;
obj->pat_index = pat_index;
if (pat_index != i915_gem_get_pat_index(i915, I915_CACHE_NONE))
obj->cache_coherent = (I915_BO_CACHE_COHERENT_FOR_READ |
I915_BO_CACHE_COHERENT_FOR_WRITE);
else if (HAS_LLC(i915))
obj->cache_coherent = I915_BO_CACHE_COHERENT_FOR_READ;
else
obj->cache_coherent = 0;
obj->cache_dirty =
!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE) &&
!IS_DGFX(i915);
}
bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);


@ -20,6 +20,8 @@
enum intel_region_id;
#define obj_to_i915(obj__) to_i915((obj__)->base.dev)
static inline bool i915_gem_object_size_2big(u64 size)
{
struct drm_i915_gem_object *obj;
@ -30,6 +32,10 @@ static inline bool i915_gem_object_size_2big(u64 size)
return false;
}
unsigned int i915_gem_get_pat_index(struct drm_i915_private *i915,
enum i915_cache_level level);
bool i915_gem_object_has_cache_level(const struct drm_i915_gem_object *obj,
enum i915_cache_level lvl);
void i915_gem_init__objects(struct drm_i915_private *i915);
void i915_objects_module_exit(void);
@ -80,7 +86,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj);
/**
* i915_gem_object_lookup_rcu - look up a temporary GEM object from its handle
* @filp: DRM file private data
* @file: DRM file private data
* @handle: userspace handle
*
* Returns:
@ -760,6 +766,8 @@ bool i915_gem_object_has_unknown_state(struct drm_i915_gem_object *obj);
void i915_gem_object_set_cache_coherency(struct drm_i915_gem_object *obj,
unsigned int cache_level);
void i915_gem_object_set_pat_index(struct drm_i915_gem_object *obj,
unsigned int pat_index);
bool i915_gem_object_can_bypass_llc(struct drm_i915_gem_object *obj);
void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj);
void i915_gem_object_flush_if_display_locked(struct drm_i915_gem_object *obj);


@ -194,6 +194,13 @@ enum i915_cache_level {
* engine.
*/
I915_CACHE_WT,
/**
* @I915_MAX_CACHE_LEVEL:
*
* Mark the last entry in the enum. Used for defining cachelevel_to_pat
* array for cache_level to pat translation table.
*/
I915_MAX_CACHE_LEVEL,
};
enum i915_map_type {
@ -328,6 +335,12 @@ struct drm_i915_gem_object {
*/
#define I915_BO_ALLOC_GPU_ONLY BIT(6)
#define I915_BO_ALLOC_CCS_AUX BIT(7)
/*
* Object is allowed to retain its initial data and will not be cleared on first
* access if used along with I915_BO_ALLOC_USER. This is mainly to keep
* preallocated framebuffer data intact while transitioning it to i915drmfb.
*/
#define I915_BO_PREALLOC BIT(8)
#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | \
I915_BO_ALLOC_VOLATILE | \
I915_BO_ALLOC_CPU_CLEAR | \
@ -335,10 +348,11 @@ struct drm_i915_gem_object {
I915_BO_ALLOC_PM_VOLATILE | \
I915_BO_ALLOC_PM_EARLY | \
I915_BO_ALLOC_GPU_ONLY | \
I915_BO_ALLOC_CCS_AUX)
#define I915_BO_READONLY BIT(8)
#define I915_TILING_QUIRK_BIT 9 /* unknown swizzling; do not release! */
#define I915_BO_PROTECTED BIT(10)
I915_BO_ALLOC_CCS_AUX | \
I915_BO_PREALLOC)
#define I915_BO_READONLY BIT(9)
#define I915_TILING_QUIRK_BIT 10 /* unknown swizzling; do not release! */
#define I915_BO_PROTECTED BIT(11)
/**
* @mem_flags - Mutable placement-related flags
*
@ -350,15 +364,43 @@ struct drm_i915_gem_object {
#define I915_BO_FLAG_STRUCT_PAGE BIT(0) /* Object backed by struct pages */
#define I915_BO_FLAG_IOMEM BIT(1) /* Object backed by IO memory */
/**
* @cache_level: The desired GTT caching level.
* @pat_index: The desired PAT index.
*
* See enum i915_cache_level for possible values, along with what
* each does.
* See hardware specification for valid PAT indices for each platform.
* This field replaces the @cache_level that contains a value of enum
* i915_cache_level since PAT indices are being used by both userspace
* and kernel mode driver for caching policy control after GEN12.
* In the meantime platform specific tables are created to translate
* i915_cache_level into pat index; for more details check the macros
* defined in i915/i915_pci.c, e.g. PVC_CACHELEVEL.
* For backward compatibility, this field contains values that exactly match
* the entries of enum i915_cache_level for pre-GEN12 platforms (See
* LEGACY_CACHELEVEL), so that the PTE encode functions for these
* legacy platforms can stay the same.
*/
unsigned int cache_level:3;
unsigned int pat_index:6;
/**
* @pat_set_by_user: Indicate whether pat_index is set by user space
*
* This field is set to false by default, only set to true if the
* pat_index is set by user space. By design, user space is capable of
* managing caching behavior by setting pat_index, in which case this
* kernel mode driver should never touch the pat_index.
*/
unsigned int pat_set_by_user:1;
/**
* @cache_coherent:
*
* Note: with the change above which replaced @cache_level with pat_index,
* the use of @cache_coherent is limited to the objects created by kernel
* or by userspace without pat index specified.
* Check for @pat_set_by_user to find out if an object has pat index set
* by userspace. The ioctls to change cache settings have also been
* disabled for the objects with pat index set by userspace. Please don't
* assume @cache_coherent has the flags set as described here. A helper
* function i915_gem_object_has_cache_level() provides one way to bypass
* the use of this field.
*
* Track whether the pages are coherent with the GPU if reading or
* writing through the CPU caches. This largely depends on the
* @cache_level setting.
@ -432,6 +474,16 @@ struct drm_i915_gem_object {
/**
* @cache_dirty:
*
* Note: with the change above which replaced cache_level with pat_index,
* the use of @cache_dirty is limited to the objects created by kernel
* or by userspace without pat index specified.
* Check for @pat_set_by_user to find out if an object has pat index set
* by userspace. The ioctls to change cache settings have also been
* disabled for the objects with pat_index set by userspace. Please don't
* assume @cache_dirty is set as described here. Also see the helper function
* i915_gem_object_has_cache_level() for possible ways to bypass the use
* of this field.
*
* Track if we are dirty with writes through the CPU cache for this
* object. As a result reading directly from main memory might yield
* stale data.


@ -469,7 +469,10 @@ enum i915_map_type i915_coherent_map_type(struct drm_i915_private *i915,
struct drm_i915_gem_object *obj,
bool always_coherent)
{
if (i915_gem_object_is_lmem(obj))
/*
* Wa_22016122933: always return I915_MAP_WC for MTL
*/
if (i915_gem_object_is_lmem(obj) || IS_METEORLAKE(i915))
return I915_MAP_WC;
if (HAS_LLC(i915) || always_coherent)
return I915_MAP_WB;


@ -22,9 +22,7 @@ struct i915_gem_apply_to_region;
*/
struct i915_gem_apply_to_region_ops {
/**
* process_obj - Process the current object
* @apply: Embed this for private data.
* @obj: The current object.
* @process_obj: Process the current object
*
* Note that if this function is part of a ww transaction, and
* if returns -EDEADLK for one of the objects, it may be


@ -601,7 +601,14 @@ static int shmem_object_init(struct intel_memory_region *mem,
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
if (HAS_LLC(i915))
/*
* MTL doesn't snoop CPU cache by default for GPU access (namely
* 1-way coherency). However, some UMDs are currently depending on
* that. Make 1-way coherency the default setting for MTL. A follow-up
* patch will extend the GEM_CREATE uAPI to allow UMDs to specify the
* caching mode at BO creation time.
*/
if (HAS_LLC(i915) || (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70)))
/* On some devices, we can have the GPU use the LLC (the CPU
* cache) for about a 10% performance improvement
* compared to uncached. Graphics requests other than


@ -460,8 +460,6 @@ void i915_gem_shrinker_taints_mutex(struct drm_i915_private *i915,
fs_reclaim_release(GFP_KERNEL);
}
#define obj_to_i915(obj__) to_i915((obj__)->base.dev)
/**
* i915_gem_object_make_unshrinkable - Hide the object from the shrinker. By
* default all object types that support shrinking(see IS_SHRINKABLE), will also


@ -535,6 +535,14 @@ static int i915_gem_init_stolen(struct intel_memory_region *mem)
/* Basic memrange allocator for stolen space. */
drm_mm_init(&i915->mm.stolen, 0, i915->dsm.usable_size);
/*
* Access to stolen lmem beyond a certain size for MTL A0 stepping
* would crash the machine. Disable stolen lmem for userspace access
* by setting usable_size to zero.
*/
if (IS_METEORLAKE(i915) && INTEL_REVID(i915) == 0x0)
i915->dsm.usable_size = 0;
return 0;
}
@ -557,7 +565,9 @@ static void dbg_poison(struct i915_ggtt *ggtt,
ggtt->vm.insert_page(&ggtt->vm, addr,
ggtt->error_capture.start,
I915_CACHE_NONE, 0);
i915_gem_get_pat_index(ggtt->vm.i915,
I915_CACHE_NONE),
0);
mb();
s = io_mapping_map_wc(&ggtt->iomap,


@ -42,8 +42,9 @@ static inline bool i915_ttm_is_ghost_object(struct ttm_buffer_object *bo)
/**
* i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
* struct drm_i915_gem_object.
* @bo: Pointer to the ttm buffer object
*
* Return: Pointer to the embedding struct ttm_buffer_object.
* Return: Pointer to the embedding struct drm_i915_gem_object.
*/
static inline struct drm_i915_gem_object *
i915_ttm_to_gem(struct ttm_buffer_object *bo)


@ -214,7 +214,8 @@ static struct dma_fence *i915_ttm_accel_move(struct ttm_buffer_object *bo,
intel_engine_pm_get(to_gt(i915)->migrate.context->engine);
ret = intel_context_migrate_clear(to_gt(i915)->migrate.context, deps,
dst_st->sgl, dst_level,
dst_st->sgl,
i915_gem_get_pat_index(i915, dst_level),
i915_ttm_gtt_binds_lmem(dst_mem),
0, &rq);
} else {
@ -228,9 +229,10 @@ static struct dma_fence *i915_ttm_accel_move(struct ttm_buffer_object *bo,
intel_engine_pm_get(to_gt(i915)->migrate.context->engine);
ret = intel_context_migrate_copy(to_gt(i915)->migrate.context,
deps, src_rsgt->table.sgl,
src_level,
i915_gem_get_pat_index(i915, src_level),
i915_ttm_gtt_binds_lmem(bo->resource),
dst_st->sgl, dst_level,
dst_st->sgl,
i915_gem_get_pat_index(i915, dst_level),
i915_ttm_gtt_binds_lmem(dst_mem),
&rq);
@ -576,7 +578,7 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
struct dma_fence *migration_fence = NULL;
struct ttm_tt *ttm = bo->ttm;
struct i915_refct_sgt *dst_rsgt;
bool clear;
bool clear, prealloc_bo;
int ret;
if (GEM_WARN_ON(i915_ttm_is_ghost_object(bo))) {
@ -632,7 +634,8 @@ int i915_ttm_move(struct ttm_buffer_object *bo, bool evict,
return PTR_ERR(dst_rsgt);
clear = !i915_ttm_cpu_maps_iomem(bo->resource) && (!ttm || !ttm_tt_is_populated(ttm));
if (!(clear && ttm && !(ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC))) {
prealloc_bo = obj->flags & I915_BO_PREALLOC;
if (!(clear && ttm && !((ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC) && !prealloc_bo))) {
struct i915_deps deps;
i915_deps_init(&deps, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);


@ -354,7 +354,7 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->cache_level = I915_CACHE_NONE;
obj->pat_index = i915_gem_get_pat_index(i915, I915_CACHE_NONE);
return obj;
}
@ -695,8 +695,7 @@ out_put:
return err;
}
static void close_object_list(struct list_head *objects,
struct i915_ppgtt *ppgtt)
static void close_object_list(struct list_head *objects)
{
struct drm_i915_gem_object *obj, *on;
@ -710,17 +709,36 @@ static void close_object_list(struct list_head *objects,
}
}
static int igt_mock_ppgtt_huge_fill(void *arg)
static int igt_ppgtt_huge_fill(void *arg)
{
struct i915_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->vm.i915;
unsigned long max_pages = ppgtt->vm.total >> PAGE_SHIFT;
struct drm_i915_private *i915 = arg;
unsigned int supported = RUNTIME_INFO(i915)->page_sizes;
bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50);
struct i915_address_space *vm;
struct i915_gem_context *ctx;
unsigned long max_pages;
unsigned long page_num;
struct file *file;
bool single = false;
LIST_HEAD(objects);
IGT_TIMEOUT(end_time);
int err = -ENODEV;
if (supported == I915_GTT_PAGE_SIZE_4K)
return 0;
file = mock_file(i915);
if (IS_ERR(file))
return PTR_ERR(file);
ctx = hugepage_ctx(i915, file);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out;
}
vm = i915_gem_context_get_eb_vm(ctx);
max_pages = vm->total >> PAGE_SHIFT;
for_each_prime_number_from(page_num, 1, max_pages) {
struct drm_i915_gem_object *obj;
u64 size = page_num << PAGE_SHIFT;
@ -750,13 +768,14 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
list_add(&obj->st_link, &objects);
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
break;
}
err = i915_vma_pin(vma, 0, 0, PIN_USER);
/* vma start must be aligned to BIT(21) to allow 2M PTEs */
err = i915_vma_pin(vma, 0, BIT(21), PIN_USER);
if (err)
break;
@ -784,12 +803,13 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
GEM_BUG_ON(!expected_gtt);
GEM_BUG_ON(size);
if (expected_gtt & I915_GTT_PAGE_SIZE_4K)
if (!has_pte64 && (obj->base.size < I915_GTT_PAGE_SIZE_2M ||
expected_gtt & I915_GTT_PAGE_SIZE_2M))
expected_gtt &= ~I915_GTT_PAGE_SIZE_64K;
i915_vma_unpin(vma);
if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
if (!has_pte64 && vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
if (!IS_ALIGNED(vma->node.start,
I915_GTT_PAGE_SIZE_2M)) {
pr_err("node.start(%llx) not aligned to 2M\n",
@ -808,7 +828,7 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
}
if (vma->resource->page_sizes_gtt != expected_gtt) {
pr_err("gtt=%u, expected=%u, size=%zd, single=%s\n",
pr_err("gtt=%#x, expected=%#x, size=0x%zx, single=%s\n",
vma->resource->page_sizes_gtt, expected_gtt,
obj->base.size, str_yes_no(!!single));
err = -EINVAL;
@ -823,19 +843,25 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
single = !single;
}
close_object_list(&objects, ppgtt);
close_object_list(&objects);
if (err == -ENOMEM || err == -ENOSPC)
err = 0;
i915_vm_put(vm);
out:
fput(file);
return err;
}
static int igt_mock_ppgtt_64K(void *arg)
static int igt_ppgtt_64K(void *arg)
{
struct i915_ppgtt *ppgtt = arg;
struct drm_i915_private *i915 = ppgtt->vm.i915;
struct drm_i915_private *i915 = arg;
bool has_pte64 = GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50);
struct drm_i915_gem_object *obj;
struct i915_address_space *vm;
struct i915_gem_context *ctx;
struct file *file;
const struct object_info {
unsigned int size;
unsigned int gtt;
@ -907,16 +933,41 @@ static int igt_mock_ppgtt_64K(void *arg)
if (!HAS_PAGE_SIZES(i915, I915_GTT_PAGE_SIZE_64K))
return 0;
file = mock_file(i915);
if (IS_ERR(file))
return PTR_ERR(file);
ctx = hugepage_ctx(i915, file);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out;
}
vm = i915_gem_context_get_eb_vm(ctx);
for (i = 0; i < ARRAY_SIZE(objects); ++i) {
unsigned int size = objects[i].size;
unsigned int expected_gtt = objects[i].gtt;
unsigned int offset = objects[i].offset;
unsigned int flags = PIN_USER;
/*
* For modern GTT models, the requirements for marking a page-table
* as 64K have been relaxed. Account for this.
*/
if (has_pte64) {
expected_gtt = 0;
if (size >= SZ_64K)
expected_gtt |= I915_GTT_PAGE_SIZE_64K;
if (size & (SZ_64K - 1))
expected_gtt |= I915_GTT_PAGE_SIZE_4K;
}
for (single = 0; single <= 1; single++) {
obj = fake_huge_pages_object(i915, size, !!single);
if (IS_ERR(obj))
return PTR_ERR(obj);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
goto out_vm;
}
err = i915_gem_object_pin_pages_unlocked(obj);
if (err)
@ -928,7 +979,7 @@ static int igt_mock_ppgtt_64K(void *arg)
*/
obj->mm.page_sizes.sg &= ~I915_GTT_PAGE_SIZE_2M;
vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto out_object_unpin;
@ -945,7 +996,8 @@ static int igt_mock_ppgtt_64K(void *arg)
if (err)
goto out_vma_unpin;
if (!offset && vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
if (!has_pte64 && !offset &&
vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
if (!IS_ALIGNED(vma->node.start,
I915_GTT_PAGE_SIZE_2M)) {
pr_err("node.start(%llx) not aligned to 2M\n",
@ -964,9 +1016,10 @@ static int igt_mock_ppgtt_64K(void *arg)
}
if (vma->resource->page_sizes_gtt != expected_gtt) {
pr_err("gtt=%u, expected=%u, i=%d, single=%s\n",
pr_err("gtt=%#x, expected=%#x, i=%d, single=%s offset=%#x size=%#x\n",
vma->resource->page_sizes_gtt,
expected_gtt, i, str_yes_no(!!single));
expected_gtt, i, str_yes_no(!!single),
offset, size);
err = -EINVAL;
goto out_vma_unpin;
}
@ -982,7 +1035,7 @@ static int igt_mock_ppgtt_64K(void *arg)
}
}
return 0;
goto out_vm;
out_vma_unpin:
i915_vma_unpin(vma);
@ -992,7 +1045,10 @@ out_object_unpin:
i915_gem_object_unlock(obj);
out_object_put:
i915_gem_object_put(obj);
out_vm:
i915_vm_put(vm);
out:
fput(file);
return err;
}
@ -1910,8 +1966,6 @@ int i915_gem_huge_page_mock_selftests(void)
SUBTEST(igt_mock_exhaust_device_supported_pages),
SUBTEST(igt_mock_memory_region_huge_pages),
SUBTEST(igt_mock_ppgtt_misaligned_dma),
SUBTEST(igt_mock_ppgtt_huge_fill),
SUBTEST(igt_mock_ppgtt_64K),
};
struct drm_i915_private *dev_priv;
struct i915_ppgtt *ppgtt;
@ -1962,6 +2016,8 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
SUBTEST(igt_ppgtt_sanity_check),
SUBTEST(igt_ppgtt_compact),
SUBTEST(igt_ppgtt_mixed),
SUBTEST(igt_ppgtt_huge_fill),
SUBTEST(igt_ppgtt_64K),
};
if (!HAS_PPGTT(i915)) {


@ -66,7 +66,7 @@ static int live_nop_switch(void *arg)
ctx[n] = live_context(i915, file);
if (IS_ERR(ctx[n])) {
err = PTR_ERR(ctx[n]);
goto out_file;
goto out_ctx;
}
}
@ -82,7 +82,7 @@ static int live_nop_switch(void *arg)
this = igt_request_alloc(ctx[n], engine);
if (IS_ERR(this)) {
err = PTR_ERR(this);
goto out_file;
goto out_ctx;
}
if (rq) {
i915_request_await_dma_fence(this, &rq->fence);
@ -93,10 +93,10 @@ static int live_nop_switch(void *arg)
}
if (i915_request_wait(rq, 0, 10 * HZ) < 0) {
pr_err("Failed to populated %d contexts\n", nctx);
intel_gt_set_wedged(to_gt(i915));
intel_gt_set_wedged(engine->gt);
i915_request_put(rq);
err = -EIO;
goto out_file;
goto out_ctx;
}
i915_request_put(rq);
@ -107,7 +107,7 @@ static int live_nop_switch(void *arg)
err = igt_live_test_begin(&t, i915, __func__, engine->name);
if (err)
goto out_file;
goto out_ctx;
end_time = jiffies + i915_selftest.timeout_jiffies;
for_each_prime_number_from(prime, 2, 8192) {
@ -120,7 +120,7 @@ static int live_nop_switch(void *arg)
this = igt_request_alloc(ctx[n % nctx], engine);
if (IS_ERR(this)) {
err = PTR_ERR(this);
goto out_file;
goto out_ctx;
}
if (rq) { /* Force submission order */
@ -149,7 +149,7 @@ static int live_nop_switch(void *arg)
if (i915_request_wait(rq, 0, HZ / 5) < 0) {
pr_err("Switching between %ld contexts timed out\n",
prime);
intel_gt_set_wedged(to_gt(i915));
intel_gt_set_wedged(engine->gt);
i915_request_put(rq);
break;
}
@ -165,7 +165,7 @@ static int live_nop_switch(void *arg)
err = igt_live_test_end(&t);
if (err)
goto out_file;
goto out_ctx;
pr_info("Switch latencies on %s: 1 = %lluns, %lu = %lluns\n",
engine->name,
@ -173,6 +173,8 @@ static int live_nop_switch(void *arg)
prime - 1, div64_u64(ktime_to_ns(times[1]), prime - 1));
}
out_ctx:
kfree(ctx);
out_file:
fput(file);
return err;


@ -219,7 +219,7 @@ static int __igt_lmem_pages_migrate(struct intel_gt *gt,
continue;
err = intel_migrate_clear(&gt->migrate, &ww, deps,
obj->mm.pages->sgl, obj->cache_level,
obj->mm.pages->sgl, obj->pat_index,
i915_gem_object_is_lmem(obj),
0xdeadbeaf, &rq);
if (rq) {


@ -1222,7 +1222,7 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
}
err = intel_context_migrate_clear(to_gt(i915)->migrate.context, NULL,
obj->mm.pages->sgl, obj->cache_level,
obj->mm.pages->sgl, obj->pat_index,
i915_gem_object_is_lmem(obj),
expand32(POISON_INUSE), &rq);
i915_gem_object_unpin_pages(obj);


@ -109,7 +109,7 @@ static void gen6_ppgtt_clear_range(struct i915_address_space *vm,
static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
struct i915_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
@ -117,7 +117,7 @@ static void gen6_ppgtt_insert_entries(struct i915_address_space *vm,
unsigned int first_entry = vma_res->start / I915_GTT_PAGE_SIZE;
unsigned int act_pt = first_entry / GEN6_PTES;
unsigned int act_pte = first_entry % GEN6_PTES;
const u32 pte_encode = vm->pte_encode(0, cache_level, flags);
const u32 pte_encode = vm->pte_encode(0, pat_index, flags);
struct sgt_dma iter = sgt_dma(vma_res);
gen6_pte_t *vaddr;
@ -227,7 +227,9 @@ static int gen6_ppgtt_init_scratch(struct gen6_ppgtt *ppgtt)
vm->scratch[0]->encode =
vm->pte_encode(px_dma(vm->scratch[0]),
I915_CACHE_NONE, PTE_READ_ONLY);
i915_gem_get_pat_index(vm->i915,
I915_CACHE_NONE),
PTE_READ_ONLY);
vm->scratch[1] = vm->alloc_pt_dma(vm, I915_GTT_PAGE_SIZE_4K);
if (IS_ERR(vm->scratch[1])) {
@ -278,7 +280,7 @@ static void gen6_ppgtt_cleanup(struct i915_address_space *vm)
static void pd_vma_bind(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 unused)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);


@ -29,7 +29,7 @@ static u64 gen8_pde_encode(const dma_addr_t addr,
}
static u64 gen8_pte_encode(dma_addr_t addr,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
gen8_pte_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;
@ -40,7 +40,12 @@ static u64 gen8_pte_encode(dma_addr_t addr,
if (flags & PTE_LM)
pte |= GEN12_PPGTT_PTE_LM;
switch (level) {
/*
* For pre-gen12 platforms pat_index is the same as enum
* i915_cache_level, so the switch-case here is still valid.
* See translation table defined by LEGACY_CACHELEVEL.
*/
switch (pat_index) {
case I915_CACHE_NONE:
pte |= PPAT_UNCACHED;
break;
@ -55,6 +60,33 @@ static u64 gen8_pte_encode(dma_addr_t addr,
return pte;
}
static u64 gen12_pte_encode(dma_addr_t addr,
unsigned int pat_index,
u32 flags)
{
gen8_pte_t pte = addr | GEN8_PAGE_PRESENT | GEN8_PAGE_RW;
if (unlikely(flags & PTE_READ_ONLY))
pte &= ~GEN8_PAGE_RW;
if (flags & PTE_LM)
pte |= GEN12_PPGTT_PTE_LM;
if (pat_index & BIT(0))
pte |= GEN12_PPGTT_PTE_PAT0;
if (pat_index & BIT(1))
pte |= GEN12_PPGTT_PTE_PAT1;
if (pat_index & BIT(2))
pte |= GEN12_PPGTT_PTE_PAT2;
if (pat_index & BIT(3))
pte |= MTL_PPGTT_PTE_PAT3;
return pte;
}
static void gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create)
{
struct drm_i915_private *i915 = ppgtt->vm.i915;
@ -423,11 +455,11 @@ gen8_ppgtt_insert_pte(struct i915_ppgtt *ppgtt,
struct i915_page_directory *pdp,
struct sgt_dma *iter,
u64 idx,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
struct i915_page_directory *pd;
const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
const gen8_pte_t pte_encode = ppgtt->vm.pte_encode(0, pat_index, flags);
gen8_pte_t *vaddr;
pd = i915_pd_entry(pdp, gen8_pd_index(idx, 2));
@ -470,10 +502,10 @@ static void
xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
struct sgt_dma *iter,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
const gen8_pte_t pte_encode = vm->pte_encode(0, cache_level, flags);
const gen8_pte_t pte_encode = vm->pte_encode(0, pat_index, flags);
unsigned int rem = sg_dma_len(iter->sg);
u64 start = vma_res->start;
u64 end = start + vma_res->vma_size;
@ -570,6 +602,7 @@ xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm,
}
} while (rem >= page_size && index < max);
drm_clflush_virt_range(vaddr, PAGE_SIZE);
vma_res->page_sizes_gtt |= page_size;
} while (iter->sg && sg_dma_len(iter->sg));
}
@ -577,10 +610,10 @@ xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm,
static void gen8_ppgtt_insert_huge(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
struct sgt_dma *iter,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
const gen8_pte_t pte_encode = gen8_pte_encode(0, cache_level, flags);
const gen8_pte_t pte_encode = vm->pte_encode(0, pat_index, flags);
unsigned int rem = sg_dma_len(iter->sg);
u64 start = vma_res->start;
@ -700,17 +733,17 @@ static void gen8_ppgtt_insert_huge(struct i915_address_space *vm,
static void gen8_ppgtt_insert(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
struct i915_ppgtt * const ppgtt = i915_vm_to_ppgtt(vm);
struct sgt_dma iter = sgt_dma(vma_res);
if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) {
if (HAS_64K_PAGES(vm->i915))
xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 50))
xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, pat_index, flags);
else
gen8_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
gen8_ppgtt_insert_huge(vm, vma_res, &iter, pat_index, flags);
} else {
u64 idx = vma_res->start >> GEN8_PTE_SHIFT;
@ -719,7 +752,7 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm,
gen8_pdp_for_page_index(vm, idx);
idx = gen8_ppgtt_insert_pte(ppgtt, pdp, &iter, idx,
cache_level, flags);
pat_index, flags);
} while (idx);
vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
@ -729,7 +762,7 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm,
static void gen8_ppgtt_insert_entry(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
u64 idx = offset >> GEN8_PTE_SHIFT;
@ -743,14 +776,14 @@ static void gen8_ppgtt_insert_entry(struct i915_address_space *vm,
GEM_BUG_ON(pt->is_compact);
vaddr = px_vaddr(pt);
vaddr[gen8_pd_index(idx, 0)] = gen8_pte_encode(addr, level, flags);
vaddr[gen8_pd_index(idx, 0)] = vm->pte_encode(addr, pat_index, flags);
drm_clflush_virt_range(&vaddr[gen8_pd_index(idx, 0)], sizeof(*vaddr));
}
static void __xehpsdv_ppgtt_insert_entry_lm(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
u64 idx = offset >> GEN8_PTE_SHIFT;
@ -773,20 +806,20 @@ static void __xehpsdv_ppgtt_insert_entry_lm(struct i915_address_space *vm,
}
vaddr = px_vaddr(pt);
vaddr[gen8_pd_index(idx, 0) / 16] = gen8_pte_encode(addr, level, flags);
vaddr[gen8_pd_index(idx, 0) / 16] = vm->pte_encode(addr, pat_index, flags);
}
static void xehpsdv_ppgtt_insert_entry(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
if (flags & PTE_LM)
return __xehpsdv_ppgtt_insert_entry_lm(vm, addr, offset,
level, flags);
pat_index, flags);
return gen8_ppgtt_insert_entry(vm, addr, offset, level, flags);
return gen8_ppgtt_insert_entry(vm, addr, offset, pat_index, flags);
}
static int gen8_init_scratch(struct i915_address_space *vm)
@ -820,8 +853,10 @@ static int gen8_init_scratch(struct i915_address_space *vm)
pte_flags |= PTE_LM;
vm->scratch[0]->encode =
gen8_pte_encode(px_dma(vm->scratch[0]),
I915_CACHE_NONE, pte_flags);
vm->pte_encode(px_dma(vm->scratch[0]),
i915_gem_get_pat_index(vm->i915,
I915_CACHE_NONE),
pte_flags);
for (i = 1; i <= vm->top; i++) {
struct drm_i915_gem_object *obj;
@ -963,7 +998,10 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
*/
ppgtt->vm.alloc_scratch_dma = alloc_pt_dma;
ppgtt->vm.pte_encode = gen8_pte_encode;
if (GRAPHICS_VER(gt->i915) >= 12)
ppgtt->vm.pte_encode = gen12_pte_encode;
else
ppgtt->vm.pte_encode = gen8_pte_encode;
ppgtt->vm.bind_async_flags = I915_VMA_LOCAL_BIND;
ppgtt->vm.insert_entries = gen8_ppgtt_insert;


@ -10,13 +10,12 @@
struct i915_address_space;
struct intel_gt;
enum i915_cache_level;
struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt,
unsigned long lmem_pt_obj_flags);
u64 gen8_ggtt_pte_encode(dma_addr_t addr,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags);
#endif


@ -578,10 +578,13 @@ void intel_context_bind_parent_child(struct intel_context *parent,
child->parallel.parent = parent;
}
u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
{
u64 total, active;
if (ce->ops->update_stats)
ce->ops->update_stats(ce);
total = ce->stats.runtime.total;
if (ce->ops->flags & COPS_RUNTIME_CYCLES)
total *= ce->engine->gt->clock_period_ns;


@ -97,7 +97,7 @@ void intel_context_bind_parent_child(struct intel_context *parent,
/**
* intel_context_lock_pinned - Stabilises the 'pinned' status of the HW context
* @ce - the context
* @ce: the context
*
* Acquire a lock on the pinned status of the HW context, such that the context
* can neither be bound to the GPU nor unbound whilst the lock is held, i.e.
@ -111,7 +111,7 @@ static inline int intel_context_lock_pinned(struct intel_context *ce)
/**
* intel_context_is_pinned - Reports the 'pinned' status
* @ce - the context
* @ce: the context
*
* While in use by the GPU, the context, along with its ring and page
* tables is pinned into memory and the GTT.
@ -133,7 +133,7 @@ static inline void intel_context_cancel_request(struct intel_context *ce,
/**
* intel_context_unlock_pinned - Releases the earlier locking of 'pinned' status
* @ce - the context
* @ce: the context
*
* Releases the lock earlier acquired by intel_context_lock_pinned().
*/
@ -375,7 +375,7 @@ intel_context_clear_nopreempt(struct intel_context *ce)
clear_bit(CONTEXT_NOPREEMPT, &ce->flags);
}
u64 intel_context_get_total_runtime_ns(const struct intel_context *ce);
u64 intel_context_get_total_runtime_ns(struct intel_context *ce);
u64 intel_context_get_avg_runtime_ns(struct intel_context *ce);
static inline u64 intel_context_clock(void)


@ -58,6 +58,8 @@ struct intel_context_ops {
void (*sched_disable)(struct intel_context *ce);
void (*update_stats)(struct intel_context *ce);
void (*reset)(struct intel_context *ce);
void (*destroy)(struct kref *kref);


@ -1515,7 +1515,7 @@ int intel_engines_init(struct intel_gt *gt)
}
/**
* intel_engines_cleanup_common - cleans up the engine state created by
* intel_engine_cleanup_common - cleans up the engine state created by
* the common initializers.
* @engine: Engine to cleanup.
*


@ -289,6 +289,7 @@ struct intel_engine_execlists {
*/
u8 csb_head;
/* private: selftest */
I915_SELFTEST_DECLARE(struct st_preempt_hang preempt_hang;)
};


@ -117,7 +117,7 @@ static void set_scheduler_caps(struct drm_i915_private *i915)
disabled |= (I915_SCHEDULER_CAP_ENABLED |
I915_SCHEDULER_CAP_PRIORITY);
if (intel_uc_uses_guc_submission(&to_gt(i915)->uc))
if (intel_uc_uses_guc_submission(&engine->gt->uc))
enabled |= I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP;
for (i = 0; i < ARRAY_SIZE(map); i++) {


@ -220,8 +220,28 @@ static void guc_ggtt_invalidate(struct i915_ggtt *ggtt)
}
}
static u64 mtl_ggtt_pte_encode(dma_addr_t addr,
unsigned int pat_index,
u32 flags)
{
gen8_pte_t pte = addr | GEN8_PAGE_PRESENT;
WARN_ON_ONCE(addr & ~GEN12_GGTT_PTE_ADDR_MASK);
if (flags & PTE_LM)
pte |= GEN12_GGTT_PTE_LM;
if (pat_index & BIT(0))
pte |= MTL_GGTT_PTE_PAT0;
if (pat_index & BIT(1))
pte |= MTL_GGTT_PTE_PAT1;
return pte;
}
u64 gen8_ggtt_pte_encode(dma_addr_t addr,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
gen8_pte_t pte = addr | GEN8_PAGE_PRESENT;
@ -240,25 +260,25 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
static void gen8_ggtt_insert_page(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
gen8_pte_t __iomem *pte =
(gen8_pte_t __iomem *)ggtt->gsm + offset / I915_GTT_PAGE_SIZE;
gen8_set_pte(pte, gen8_ggtt_pte_encode(addr, level, flags));
gen8_set_pte(pte, ggtt->vm.pte_encode(addr, pat_index, flags));
ggtt->invalidate(ggtt);
}
static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
const gen8_pte_t pte_encode = gen8_ggtt_pte_encode(0, level, flags);
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
const gen8_pte_t pte_encode = ggtt->vm.pte_encode(0, pat_index, flags);
gen8_pte_t __iomem *gte;
gen8_pte_t __iomem *end;
struct sgt_iter iter;
@ -315,14 +335,14 @@ static void gen8_ggtt_clear_range(struct i915_address_space *vm,
static void gen6_ggtt_insert_page(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
gen6_pte_t __iomem *pte =
(gen6_pte_t __iomem *)ggtt->gsm + offset / I915_GTT_PAGE_SIZE;
iowrite32(vm->pte_encode(addr, level, flags), pte);
iowrite32(vm->pte_encode(addr, pat_index, flags), pte);
ggtt->invalidate(ggtt);
}
@ -335,7 +355,7 @@ static void gen6_ggtt_insert_page(struct i915_address_space *vm,
*/
static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
@ -352,7 +372,7 @@ static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
iowrite32(vm->scratch[0]->encode, gte++);
end += (vma_res->node_size + vma_res->guard) / I915_GTT_PAGE_SIZE;
for_each_sgt_daddr(addr, iter, vma_res->bi.pages)
iowrite32(vm->pte_encode(addr, level, flags), gte++);
iowrite32(vm->pte_encode(addr, pat_index, flags), gte++);
GEM_BUG_ON(gte > end);
/* Fill the allocated but "unused" space beyond the end of the buffer */
@ -387,14 +407,15 @@ struct insert_page {
struct i915_address_space *vm;
dma_addr_t addr;
u64 offset;
enum i915_cache_level level;
unsigned int pat_index;
};
static int bxt_vtd_ggtt_insert_page__cb(void *_arg)
{
struct insert_page *arg = _arg;
gen8_ggtt_insert_page(arg->vm, arg->addr, arg->offset, arg->level, 0);
gen8_ggtt_insert_page(arg->vm, arg->addr, arg->offset,
arg->pat_index, 0);
bxt_vtd_ggtt_wa(arg->vm);
return 0;
@ -403,10 +424,10 @@ static int bxt_vtd_ggtt_insert_page__cb(void *_arg)
static void bxt_vtd_ggtt_insert_page__BKL(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level level,
unsigned int pat_index,
u32 unused)
{
struct insert_page arg = { vm, addr, offset, level };
struct insert_page arg = { vm, addr, offset, pat_index };
stop_machine(bxt_vtd_ggtt_insert_page__cb, &arg, NULL);
}
@ -414,7 +435,7 @@ static void bxt_vtd_ggtt_insert_page__BKL(struct i915_address_space *vm,
struct insert_entries {
struct i915_address_space *vm;
struct i915_vma_resource *vma_res;
enum i915_cache_level level;
unsigned int pat_index;
u32 flags;
};
@ -422,7 +443,8 @@ static int bxt_vtd_ggtt_insert_entries__cb(void *_arg)
{
struct insert_entries *arg = _arg;
gen8_ggtt_insert_entries(arg->vm, arg->vma_res, arg->level, arg->flags);
gen8_ggtt_insert_entries(arg->vm, arg->vma_res,
arg->pat_index, arg->flags);
bxt_vtd_ggtt_wa(arg->vm);
return 0;
@ -430,10 +452,10 @@ static int bxt_vtd_ggtt_insert_entries__cb(void *_arg)
static void bxt_vtd_ggtt_insert_entries__BKL(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags)
{
struct insert_entries arg = { vm, vma_res, level, flags };
struct insert_entries arg = { vm, vma_res, pat_index, flags };
stop_machine(bxt_vtd_ggtt_insert_entries__cb, &arg, NULL);
}
@ -462,7 +484,7 @@ static void gen6_ggtt_clear_range(struct i915_address_space *vm,
void intel_ggtt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
u32 pte_flags;
@ -479,7 +501,7 @@ void intel_ggtt_bind_vma(struct i915_address_space *vm,
if (vma_res->bi.lmem)
pte_flags |= PTE_LM;
vm->insert_entries(vm, vma_res, cache_level, pte_flags);
vm->insert_entries(vm, vma_res, pat_index, pte_flags);
vma_res->page_sizes_gtt = I915_GTT_PAGE_SIZE;
}
@ -628,7 +650,7 @@ err:
static void aliasing_gtt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
u32 pte_flags;
@ -640,10 +662,10 @@ static void aliasing_gtt_bind_vma(struct i915_address_space *vm,
if (flags & I915_VMA_LOCAL_BIND)
ppgtt_bind_vma(&i915_vm_to_ggtt(vm)->alias->vm,
stash, vma_res, cache_level, flags);
stash, vma_res, pat_index, flags);
if (flags & I915_VMA_GLOBAL_BIND)
vm->insert_entries(vm, vma_res, cache_level, pte_flags);
vm->insert_entries(vm, vma_res, pat_index, pte_flags);
vma_res->bound_flags |= flags;
}
@ -900,7 +922,9 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
ggtt->vm.scratch[0]->encode =
ggtt->vm.pte_encode(px_dma(ggtt->vm.scratch[0]),
I915_CACHE_NONE, pte_flags);
i915_gem_get_pat_index(i915,
I915_CACHE_NONE),
pte_flags);
return 0;
}
@ -981,11 +1005,19 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
ggtt->vm.vma_ops.bind_vma = intel_ggtt_bind_vma;
ggtt->vm.vma_ops.unbind_vma = intel_ggtt_unbind_vma;
ggtt->vm.pte_encode = gen8_ggtt_pte_encode;
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70))
ggtt->vm.pte_encode = mtl_ggtt_pte_encode;
else
ggtt->vm.pte_encode = gen8_ggtt_pte_encode;
return ggtt_probe_common(ggtt, size);
}
/*
* For pre-gen8 platforms pat_index is the same as enum i915_cache_level,
* so these PTE encode functions still take cache_level.
* See translation table LEGACY_CACHELEVEL.
*/
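Before moving on to the legacy encoders below, here is a minimal, hypothetical sketch of the cache_level to pat_index translation that the i915_gem_get_pat_index() callers in this series rely on. The table contents are placeholders for illustration, not the driver's real per-platform PAT tables; on these legacy paths the mapping is simply the identity described in the comment above.

#include <assert.h>
#include <stdio.h>

/* Stand-in for enum i915_cache_level; values are illustrative only. */
enum ex_cache_level { EX_CACHE_NONE, EX_CACHE_LLC, EX_CACHE_L3_LLC, EX_CACHE_WT, EX_CACHE_COUNT };

/*
 * Hypothetical legacy translation table: on pre-gen8 style platforms the
 * PAT index is simply the cache level itself; newer platforms would
 * install a platform-specific table instead.
 */
static const unsigned int ex_legacy_cachelevel[EX_CACHE_COUNT] = {
        [EX_CACHE_NONE]   = 0,
        [EX_CACHE_LLC]    = 1,
        [EX_CACHE_L3_LLC] = 2,
        [EX_CACHE_WT]     = 3,
};

static unsigned int ex_get_pat_index(enum ex_cache_level level)
{
        assert(level < EX_CACHE_COUNT);
        return ex_legacy_cachelevel[level];
}

int main(void)
{
        printf("CACHE_NONE -> pat_index %u\n", ex_get_pat_index(EX_CACHE_NONE));
        return 0;
}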
static u64 snb_pte_encode(dma_addr_t addr,
enum i915_cache_level level,
u32 flags)
@ -1266,7 +1298,9 @@ bool i915_ggtt_resume_vm(struct i915_address_space *vm)
*/
vma->resource->bound_flags = 0;
vma->ops->bind_vma(vm, NULL, vma->resource,
obj ? obj->cache_level : 0,
obj ? obj->pat_index :
i915_gem_get_pat_index(vm->i915,
I915_CACHE_NONE),
was_bound);
if (obj) { /* only used during resume => exclusive access */


@ -15,6 +15,7 @@
#include "intel_uncore.h"
#include "intel_rps.h"
#include "pxp/intel_pxp_irq.h"
#include "uc/intel_gsc_proxy.h"
static void guc_irq_handler(struct intel_guc *guc, u16 iir)
{
@ -81,6 +82,9 @@ gen11_other_irq_handler(struct intel_gt *gt, const u8 instance,
if (instance == OTHER_GSC_INSTANCE)
return intel_gsc_irq_handler(gt, iir);
if (instance == OTHER_GSC_HECI_2_INSTANCE)
return intel_gsc_proxy_irq_handler(&gt->uc.gsc, iir);
WARN_ONCE(1, "unhandled other interrupt instance=0x%x, iir=0x%x\n",
instance, iir);
}
@ -100,7 +104,10 @@ static struct intel_gt *pick_gt(struct intel_gt *gt, u8 class, u8 instance)
case VIDEO_ENHANCEMENT_CLASS:
return media_gt;
case OTHER_CLASS:
if (instance == OTHER_GSC_INSTANCE && HAS_ENGINE(media_gt, GSC0))
if (instance == OTHER_GSC_HECI_2_INSTANCE)
return media_gt;
if ((instance == OTHER_GSC_INSTANCE || instance == OTHER_KCR_INSTANCE) &&
HAS_ENGINE(media_gt, GSC0))
return media_gt;
fallthrough;
default:
@ -256,6 +263,7 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
u32 irqs = GT_RENDER_USER_INTERRUPT;
u32 guc_mask = intel_uc_wants_guc(&gt->uc) ? GUC_INTR_GUC2HOST : 0;
u32 gsc_mask = 0;
u32 heci_mask = 0;
u32 dmask;
u32 smask;
@ -267,10 +275,16 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
dmask = irqs << 16 | irqs;
smask = irqs << 16;
if (HAS_ENGINE(gt, GSC0))
if (HAS_ENGINE(gt, GSC0)) {
/*
* the heci2 interrupt is enabled via the same register as the
* GSC interrupt, but it has its own mask register.
*/
gsc_mask = irqs;
else if (HAS_HECI_GSC(gt->i915))
heci_mask = GSC_IRQ_INTF(1); /* HECI2 IRQ for SW Proxy*/
} else if (HAS_HECI_GSC(gt->i915)) {
gsc_mask = GSC_IRQ_INTF(0) | GSC_IRQ_INTF(1);
}
BUILD_BUG_ON(irqs & 0xffff0000);
@ -280,7 +294,7 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
if (CCS_MASK(gt))
intel_uncore_write(uncore, GEN12_CCS_RSVD_INTR_ENABLE, smask);
if (gsc_mask)
intel_uncore_write(uncore, GEN11_GUNIT_CSME_INTR_ENABLE, gsc_mask);
intel_uncore_write(uncore, GEN11_GUNIT_CSME_INTR_ENABLE, gsc_mask | heci_mask);
/* Unmask irqs on RCS, BCS, VCS and VECS engines. */
intel_uncore_write(uncore, GEN11_RCS0_RSVD_INTR_MASK, ~smask);
@ -308,6 +322,9 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
intel_uncore_write(uncore, GEN12_CCS2_CCS3_INTR_MASK, ~dmask);
if (gsc_mask)
intel_uncore_write(uncore, GEN11_GUNIT_CSME_INTR_MASK, ~gsc_mask);
if (heci_mask)
intel_uncore_write(uncore, GEN12_HECI2_RSVD_INTR_MASK,
~REG_FIELD_PREP(ENGINE1_MASK, heci_mask));
if (guc_mask) {
/* the enable bit is common for both GTs but the masks are separate */


@ -87,7 +87,7 @@ static int __gt_unpark(struct intel_wakeref *wf)
intel_rc6_unpark(&gt->rc6);
intel_rps_unpark(&gt->rps);
i915_pmu_gt_unparked(i915);
i915_pmu_gt_unparked(gt);
intel_guc_busyness_unpark(gt);
intel_gt_unpark_requests(gt);
@ -109,7 +109,7 @@ static int __gt_park(struct intel_wakeref *wf)
intel_guc_busyness_park(gt);
i915_vma_parked(gt);
i915_pmu_gt_parked(i915);
i915_pmu_gt_parked(gt);
intel_rps_park(&gt->rps);
intel_rc6_park(&gt->rc6);


@ -539,7 +539,10 @@ static bool rps_eval(void *data)
{
struct intel_gt *gt = data;
return HAS_RPS(gt->i915);
if (intel_guc_slpc_is_used(&gt->uc.guc))
return false;
else
return HAS_RPS(gt->i915);
}
DEFINE_INTEL_GT_DEBUGFS_ATTRIBUTE(rps_boost);


@ -356,7 +356,11 @@
#define GEN7_TLB_RD_ADDR _MMIO(0x4700)
#define GEN12_PAT_INDEX(index) _MMIO(0x4800 + (index) * 4)
#define XEHP_PAT_INDEX(index) MCR_REG(0x4800 + (index) * 4)
#define _PAT_INDEX(index) _PICK_EVEN_2RANGES(index, 8, \
0x4800, 0x4804, \
0x4848, 0x484c)
#define XEHP_PAT_INDEX(index) MCR_REG(_PAT_INDEX(index))
#define XELPMP_PAT_INDEX(index) _MMIO(_PAT_INDEX(index))
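Assuming _PICK_EVEN_2RANGES() does what its arguments suggest here, with the first 8 indices coming from one evenly spaced range and the remainder from a second, the offset selection boils down to the standalone sketch below (base offsets copied from the macro above, stride inferred from the second offset of each range).

#include <stdint.h>
#include <stdio.h>

/* Illustrative offset calculation for the split PAT index register range. */
static uint32_t ex_pat_index_offset(unsigned int index)
{
        if (index < 8)
                return 0x4800 + index * 4;       /* 0x4800, 0x4804, ... */
        return 0x4848 + (index - 8) * 4;         /* 0x4848, 0x484c, ... */
}

int main(void)
{
        printf("PAT_INDEX(3)  -> 0x%x\n", ex_pat_index_offset(3));   /* 0x480c */
        printf("PAT_INDEX(10) -> 0x%x\n", ex_pat_index_offset(10));  /* 0x4850 */
        return 0;
}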
#define XEHP_TILE0_ADDR_RANGE MCR_REG(0x4900)
#define XEHP_TILE_LMEM_RANGE_SHIFT 8
@ -525,6 +529,11 @@
#define GEN8_RC6_CTX_INFO _MMIO(0x8504)
#define GEN12_SQCNT1 _MMIO(0x8718)
#define GEN12_SQCNT1_PMON_ENABLE REG_BIT(30)
#define GEN12_SQCNT1_OABPC REG_BIT(29)
#define GEN12_STRICT_RAR_ENABLE REG_BIT(23)
#define XEHP_SQCM MCR_REG(0x8724)
#define EN_32B_ACCESS REG_BIT(30)
@ -1587,6 +1596,7 @@
#define GEN11_GT_INTR_DW(x) _MMIO(0x190018 + ((x) * 4))
#define GEN11_CSME (31)
#define GEN12_HECI_2 (30)
#define GEN11_GUNIT (28)
#define GEN11_GUC (25)
#define MTL_MGUC (24)
@ -1628,6 +1638,7 @@
/* irq instances for OTHER_CLASS */
#define OTHER_GUC_INSTANCE 0
#define OTHER_GTPM_INSTANCE 1
#define OTHER_GSC_HECI_2_INSTANCE 3
#define OTHER_KCR_INSTANCE 4
#define OTHER_GSC_INSTANCE 6
#define OTHER_MEDIA_GUC_INSTANCE 16
@ -1643,6 +1654,7 @@
#define GEN12_VCS6_VCS7_INTR_MASK _MMIO(0x1900b4)
#define GEN11_VECS0_VECS1_INTR_MASK _MMIO(0x1900d0)
#define GEN12_VECS2_VECS3_INTR_MASK _MMIO(0x1900d4)
#define GEN12_HECI2_RSVD_INTR_MASK _MMIO(0x1900e4)
#define GEN11_GUC_SG_INTR_MASK _MMIO(0x1900e8)
#define MTL_GUC_MGUC_INTR_MASK _MMIO(0x1900e8) /* MTL+ */
#define GEN11_GPM_WGBOXPERF_INTR_MASK _MMIO(0x1900ec)


@ -451,6 +451,33 @@ static ssize_t punit_req_freq_mhz_show(struct kobject *kobj,
return sysfs_emit(buff, "%u\n", preq);
}
static ssize_t slpc_ignore_eff_freq_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buff)
{
struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
return sysfs_emit(buff, "%u\n", slpc->ignore_eff_freq);
}
static ssize_t slpc_ignore_eff_freq_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buff, size_t count)
{
struct intel_gt *gt = intel_gt_sysfs_get_drvdata(kobj, attr->attr.name);
struct intel_guc_slpc *slpc = &gt->uc.guc.slpc;
int err;
u32 val;
err = kstrtou32(buff, 0, &val);
if (err)
return err;
err = intel_guc_slpc_set_ignore_eff_freq(slpc, val);
return err ?: count;
}
struct intel_gt_bool_throttle_attr {
struct attribute attr;
ssize_t (*show)(struct kobject *kobj, struct kobj_attribute *attr,
@ -663,6 +690,8 @@ static struct kobj_attribute attr_media_freq_factor_scale =
INTEL_GT_ATTR_RO(media_RP0_freq_mhz);
INTEL_GT_ATTR_RO(media_RPn_freq_mhz);
INTEL_GT_ATTR_RW(slpc_ignore_eff_freq);
static const struct attribute *media_perf_power_attrs[] = {
&attr_media_freq_factor.attr,
&attr_media_freq_factor_scale.attr,
@ -744,6 +773,12 @@ void intel_gt_sysfs_pm_init(struct intel_gt *gt, struct kobject *kobj)
if (ret)
gt_warn(gt, "failed to create punit_req_freq_mhz sysfs (%pe)", ERR_PTR(ret));
if (intel_uc_uses_guc_slpc(&gt->uc)) {
ret = sysfs_create_file(kobj, &attr_slpc_ignore_eff_freq.attr);
if (ret)
gt_warn(gt, "failed to create ignore_eff_freq sysfs (%pe)", ERR_PTR(ret));
}
if (i915_mmio_reg_valid(intel_gt_perf_limit_reasons_reg(gt))) {
ret = sysfs_create_files(kobj, throttle_reason_attrs);
if (ret)


@ -468,6 +468,44 @@ void gtt_write_workarounds(struct intel_gt *gt)
}
}
static void xelpmp_setup_private_ppat(struct intel_uncore *uncore)
{
intel_uncore_write(uncore, XELPMP_PAT_INDEX(0),
MTL_PPAT_L4_0_WB);
intel_uncore_write(uncore, XELPMP_PAT_INDEX(1),
MTL_PPAT_L4_1_WT);
intel_uncore_write(uncore, XELPMP_PAT_INDEX(2),
MTL_PPAT_L4_3_UC);
intel_uncore_write(uncore, XELPMP_PAT_INDEX(3),
MTL_PPAT_L4_0_WB | MTL_2_COH_1W);
intel_uncore_write(uncore, XELPMP_PAT_INDEX(4),
MTL_PPAT_L4_0_WB | MTL_3_COH_2W);
/*
* Remaining PAT entries are left at the hardware-default
* fully-cached setting
*/
}
static void xelpg_setup_private_ppat(struct intel_gt *gt)
{
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(0),
MTL_PPAT_L4_0_WB);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(1),
MTL_PPAT_L4_1_WT);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(2),
MTL_PPAT_L4_3_UC);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(3),
MTL_PPAT_L4_0_WB | MTL_2_COH_1W);
intel_gt_mcr_multicast_write(gt, XEHP_PAT_INDEX(4),
MTL_PPAT_L4_0_WB | MTL_3_COH_2W);
/*
* Remaining PAT entries are left at the hardware-default
* fully-cached setting
*/
}
static void tgl_setup_private_ppat(struct intel_uncore *uncore)
{
/* TGL doesn't support LLC or AGE settings */
@ -603,7 +641,14 @@ void setup_private_pat(struct intel_gt *gt)
GEM_BUG_ON(GRAPHICS_VER(i915) < 8);
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))
if (gt->type == GT_MEDIA) {
xelpmp_setup_private_ppat(gt->uncore);
return;
}
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 70))
xelpg_setup_private_ppat(gt);
else if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))
xehp_setup_private_ppat(gt);
else if (GRAPHICS_VER(i915) >= 12)
tgl_setup_private_ppat(uncore);


@ -88,9 +88,17 @@ typedef u64 gen8_pte_t;
#define BYT_PTE_SNOOPED_BY_CPU_CACHES REG_BIT(2)
#define BYT_PTE_WRITEABLE REG_BIT(1)
#define MTL_PPGTT_PTE_PAT3 BIT_ULL(62)
#define GEN12_PPGTT_PTE_LM BIT_ULL(11)
#define GEN12_PPGTT_PTE_PAT2 BIT_ULL(7)
#define GEN12_PPGTT_PTE_PAT1 BIT_ULL(4)
#define GEN12_PPGTT_PTE_PAT0 BIT_ULL(3)
#define GEN12_GGTT_PTE_LM BIT_ULL(1)
#define GEN12_GGTT_PTE_LM BIT_ULL(1)
#define MTL_GGTT_PTE_PAT0 BIT_ULL(52)
#define MTL_GGTT_PTE_PAT1 BIT_ULL(53)
#define GEN12_GGTT_PTE_ADDR_MASK GENMASK_ULL(45, 12)
#define MTL_GGTT_PTE_PAT_MASK GENMASK_ULL(53, 52)
#define GEN12_PDE_64K BIT(6)
#define GEN12_PTE_PS64 BIT(8)
@ -147,7 +155,13 @@ typedef u64 gen8_pte_t;
#define GEN8_PDE_IPS_64K BIT(11)
#define GEN8_PDE_PS_2M BIT(7)
enum i915_cache_level;
#define MTL_PPAT_L4_CACHE_POLICY_MASK REG_GENMASK(3, 2)
#define MTL_PAT_INDEX_COH_MODE_MASK REG_GENMASK(1, 0)
#define MTL_PPAT_L4_3_UC REG_FIELD_PREP(MTL_PPAT_L4_CACHE_POLICY_MASK, 3)
#define MTL_PPAT_L4_1_WT REG_FIELD_PREP(MTL_PPAT_L4_CACHE_POLICY_MASK, 1)
#define MTL_PPAT_L4_0_WB REG_FIELD_PREP(MTL_PPAT_L4_CACHE_POLICY_MASK, 0)
#define MTL_3_COH_2W REG_FIELD_PREP(MTL_PAT_INDEX_COH_MODE_MASK, 3)
#define MTL_2_COH_1W REG_FIELD_PREP(MTL_PAT_INDEX_COH_MODE_MASK, 2)
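To make the MTL PAT programming easier to check by eye, here is a standalone sketch that recomputes the five register values written by xelpg_setup_private_ppat()/xelpmp_setup_private_ppat() earlier in this series, using only the field positions declared here (L4 policy in bits 3:2, coherency mode in bits 1:0); the macros are local to the example.

#include <stdio.h>

/* Field helpers local to this sketch; positions match the defines above. */
#define EX_L4(policy)   ((unsigned int)(policy) << 2)   /* bits 3:2 */
#define EX_COH(mode)    ((unsigned int)(mode) << 0)     /* bits 1:0 */

int main(void)
{
        /* The five MTL entries: WB, WT, UC, WB + 1-way coh, WB + 2-way coh */
        unsigned int pat[] = {
                EX_L4(0),               /* index 0: 0x0 */
                EX_L4(1),               /* index 1: 0x4 */
                EX_L4(3),               /* index 2: 0xc */
                EX_L4(0) | EX_COH(2),   /* index 3: 0x2 */
                EX_L4(0) | EX_COH(3),   /* index 4: 0x3 */
        };
        for (unsigned int i = 0; i < 5; i++)
                printf("PAT(%u) = 0x%x\n", i, pat[i]);
        return 0;
}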
struct drm_i915_gem_object;
struct i915_fence_reg;
@ -216,7 +230,7 @@ struct i915_vma_ops {
void (*bind_vma)(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
/*
* Unmap an object from an address space. This usually consists of
@ -288,7 +302,7 @@ struct i915_address_space {
(*alloc_scratch_dma)(struct i915_address_space *vm, int sz);
u64 (*pte_encode)(dma_addr_t addr,
enum i915_cache_level level,
unsigned int pat_index,
u32 flags); /* Create a valid PTE */
#define PTE_READ_ONLY BIT(0)
#define PTE_LM BIT(1)
@ -303,20 +317,20 @@ struct i915_address_space {
void (*insert_page)(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void (*insert_entries)(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void (*raw_insert_page)(struct i915_address_space *vm,
dma_addr_t addr,
u64 offset,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void (*raw_insert_entries)(struct i915_address_space *vm,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void (*cleanup)(struct i915_address_space *vm);
@ -493,7 +507,7 @@ static inline void i915_vm_put(struct i915_address_space *vm)
/**
* i915_vm_resv_put - Release a reference on the vm's reservation lock
* @resv: Pointer to a reservation lock obtained from i915_vm_resv_get()
* @vm: The vm whose reservation lock reference we want to release
*/
static inline void i915_vm_resv_put(struct i915_address_space *vm)
{
@ -563,7 +577,7 @@ void ppgtt_init(struct i915_ppgtt *ppgtt, struct intel_gt *gt,
void intel_ggtt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void intel_ggtt_unbind_vma(struct i915_address_space *vm,
struct i915_vma_resource *vma_res);
@ -641,7 +655,7 @@ void gen6_ggtt_invalidate(struct i915_ggtt *ggtt);
void ppgtt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags);
void ppgtt_unbind_vma(struct i915_address_space *vm,
struct i915_vma_resource *vma_res);


@ -1370,7 +1370,9 @@ gen12_emit_indirect_ctx_rcs(const struct intel_context *ce, u32 *cs)
cs, GEN12_GFX_CCS_AUX_NV);
/* Wa_16014892111 */
if (IS_DG2(ce->engine->i915))
if (IS_MTL_GRAPHICS_STEP(ce->engine->i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(ce->engine->i915, P, STEP_A0, STEP_B0) ||
IS_DG2(ce->engine->i915))
cs = dg2_emit_draw_watermark_setting(cs);
return cs;


@ -45,7 +45,9 @@ static void xehpsdv_toggle_pdes(struct i915_address_space *vm,
* Insert a dummy PTE into every PT that will map to LMEM to ensure
* we have a correctly setup PDE structure for later use.
*/
vm->insert_page(vm, 0, d->offset, I915_CACHE_NONE, PTE_LM);
vm->insert_page(vm, 0, d->offset,
i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE),
PTE_LM);
GEM_BUG_ON(!pt->is_compact);
d->offset += SZ_2M;
}
@ -63,7 +65,9 @@ static void xehpsdv_insert_pte(struct i915_address_space *vm,
* alignment is 64K underneath for the pt, and we are careful
* not to access the space in the void.
*/
vm->insert_page(vm, px_dma(pt), d->offset, I915_CACHE_NONE, PTE_LM);
vm->insert_page(vm, px_dma(pt), d->offset,
i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE),
PTE_LM);
d->offset += SZ_64K;
}
@ -73,7 +77,8 @@ static void insert_pte(struct i915_address_space *vm,
{
struct insert_pte_data *d = data;
vm->insert_page(vm, px_dma(pt), d->offset, I915_CACHE_NONE,
vm->insert_page(vm, px_dma(pt), d->offset,
i915_gem_get_pat_index(vm->i915, I915_CACHE_NONE),
i915_gem_object_is_lmem(pt->base) ? PTE_LM : 0);
d->offset += PAGE_SIZE;
}
@ -356,13 +361,13 @@ static int max_pte_pkt_size(struct i915_request *rq, int pkt)
static int emit_pte(struct i915_request *rq,
struct sgt_dma *it,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
u64 offset,
int length)
{
bool has_64K_pages = HAS_64K_PAGES(rq->engine->i915);
const u64 encode = rq->context->vm->pte_encode(0, cache_level,
const u64 encode = rq->context->vm->pte_encode(0, pat_index,
is_lmem ? PTE_LM : 0);
struct intel_ring *ring = rq->ring;
int pkt, dword_length;
@ -673,17 +678,17 @@ int
intel_context_migrate_copy(struct intel_context *ce,
const struct i915_deps *deps,
struct scatterlist *src,
enum i915_cache_level src_cache_level,
unsigned int src_pat_index,
bool src_is_lmem,
struct scatterlist *dst,
enum i915_cache_level dst_cache_level,
unsigned int dst_pat_index,
bool dst_is_lmem,
struct i915_request **out)
{
struct sgt_dma it_src = sg_sgt(src), it_dst = sg_sgt(dst), it_ccs;
struct drm_i915_private *i915 = ce->engine->i915;
u64 ccs_bytes_to_cpy = 0, bytes_to_cpy;
enum i915_cache_level ccs_cache_level;
unsigned int ccs_pat_index;
u32 src_offset, dst_offset;
u8 src_access, dst_access;
struct i915_request *rq;
@ -707,12 +712,12 @@ intel_context_migrate_copy(struct intel_context *ce,
dst_sz = scatter_list_length(dst);
if (src_is_lmem) {
it_ccs = it_dst;
ccs_cache_level = dst_cache_level;
ccs_pat_index = dst_pat_index;
ccs_is_src = false;
} else if (dst_is_lmem) {
bytes_to_cpy = dst_sz;
it_ccs = it_src;
ccs_cache_level = src_cache_level;
ccs_pat_index = src_pat_index;
ccs_is_src = true;
}
@ -773,7 +778,7 @@ intel_context_migrate_copy(struct intel_context *ce,
src_sz = calculate_chunk_sz(i915, src_is_lmem,
bytes_to_cpy, ccs_bytes_to_cpy);
len = emit_pte(rq, &it_src, src_cache_level, src_is_lmem,
len = emit_pte(rq, &it_src, src_pat_index, src_is_lmem,
src_offset, src_sz);
if (!len) {
err = -EINVAL;
@ -784,7 +789,7 @@ intel_context_migrate_copy(struct intel_context *ce,
goto out_rq;
}
err = emit_pte(rq, &it_dst, dst_cache_level, dst_is_lmem,
err = emit_pte(rq, &it_dst, dst_pat_index, dst_is_lmem,
dst_offset, len);
if (err < 0)
goto out_rq;
@ -811,7 +816,7 @@ intel_context_migrate_copy(struct intel_context *ce,
goto out_rq;
ccs_sz = GET_CCS_BYTES(i915, len);
err = emit_pte(rq, &it_ccs, ccs_cache_level, false,
err = emit_pte(rq, &it_ccs, ccs_pat_index, false,
ccs_is_src ? src_offset : dst_offset,
ccs_sz);
if (err < 0)
@ -920,7 +925,7 @@ static int emit_clear(struct i915_request *rq, u32 offset, int size,
GEM_BUG_ON(size >> PAGE_SHIFT > S16_MAX);
if (HAS_FLAT_CCS(i915) && ver >= 12)
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))
ring_sz = XY_FAST_COLOR_BLT_DW;
else if (ver >= 8)
ring_sz = 8;
@ -931,7 +936,7 @@ static int emit_clear(struct i915_request *rq, u32 offset, int size,
if (IS_ERR(cs))
return PTR_ERR(cs);
if (HAS_FLAT_CCS(i915) && ver >= 12) {
if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50)) {
*cs++ = XY_FAST_COLOR_BLT_CMD | XY_FAST_COLOR_BLT_DEPTH_32 |
(XY_FAST_COLOR_BLT_DW - 2);
*cs++ = FIELD_PREP(XY_FAST_COLOR_BLT_MOCS_MASK, mocs) |
@ -979,7 +984,7 @@ int
intel_context_migrate_clear(struct intel_context *ce,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
u32 value,
struct i915_request **out)
@ -1027,7 +1032,7 @@ intel_context_migrate_clear(struct intel_context *ce,
if (err)
goto out_rq;
len = emit_pte(rq, &it, cache_level, is_lmem, offset, CHUNK_SZ);
len = emit_pte(rq, &it, pat_index, is_lmem, offset, CHUNK_SZ);
if (len <= 0) {
err = len;
goto out_rq;
@ -1074,10 +1079,10 @@ int intel_migrate_copy(struct intel_migrate *m,
struct i915_gem_ww_ctx *ww,
const struct i915_deps *deps,
struct scatterlist *src,
enum i915_cache_level src_cache_level,
unsigned int src_pat_index,
bool src_is_lmem,
struct scatterlist *dst,
enum i915_cache_level dst_cache_level,
unsigned int dst_pat_index,
bool dst_is_lmem,
struct i915_request **out)
{
@ -1098,8 +1103,8 @@ int intel_migrate_copy(struct intel_migrate *m,
goto out;
err = intel_context_migrate_copy(ce, deps,
src, src_cache_level, src_is_lmem,
dst, dst_cache_level, dst_is_lmem,
src, src_pat_index, src_is_lmem,
dst, dst_pat_index, dst_is_lmem,
out);
intel_context_unpin(ce);
@ -1113,7 +1118,7 @@ intel_migrate_clear(struct intel_migrate *m,
struct i915_gem_ww_ctx *ww,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
u32 value,
struct i915_request **out)
@ -1134,7 +1139,7 @@ intel_migrate_clear(struct intel_migrate *m,
if (err)
goto out;
err = intel_context_migrate_clear(ce, deps, sg, cache_level,
err = intel_context_migrate_clear(ce, deps, sg, pat_index,
is_lmem, value, out);
intel_context_unpin(ce);


@ -16,7 +16,6 @@ struct i915_request;
struct i915_gem_ww_ctx;
struct intel_gt;
struct scatterlist;
enum i915_cache_level;
int intel_migrate_init(struct intel_migrate *m, struct intel_gt *gt);
@ -26,20 +25,20 @@ int intel_migrate_copy(struct intel_migrate *m,
struct i915_gem_ww_ctx *ww,
const struct i915_deps *deps,
struct scatterlist *src,
enum i915_cache_level src_cache_level,
unsigned int src_pat_index,
bool src_is_lmem,
struct scatterlist *dst,
enum i915_cache_level dst_cache_level,
unsigned int dst_pat_index,
bool dst_is_lmem,
struct i915_request **out);
int intel_context_migrate_copy(struct intel_context *ce,
const struct i915_deps *deps,
struct scatterlist *src,
enum i915_cache_level src_cache_level,
unsigned int src_pat_index,
bool src_is_lmem,
struct scatterlist *dst,
enum i915_cache_level dst_cache_level,
unsigned int dst_pat_index,
bool dst_is_lmem,
struct i915_request **out);
@ -48,7 +47,7 @@ intel_migrate_clear(struct intel_migrate *m,
struct i915_gem_ww_ctx *ww,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
u32 value,
struct i915_request **out);
@ -56,7 +55,7 @@ int
intel_context_migrate_clear(struct intel_context *ce,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
u32 value,
struct i915_request **out);


@ -40,6 +40,10 @@ struct drm_i915_mocs_table {
#define LE_COS(value) ((value) << 15)
#define LE_SSE(value) ((value) << 17)
/* Defines for the tables (GLOB_MOCS_0 - GLOB_MOCS_16) */
#define _L4_CACHEABILITY(value) ((value) << 2)
#define IG_PAT(value) ((value) << 8)
/* Defines for the tables (LNCFMOCS0 - LNCFMOCS31) - two entries per word */
#define L3_ESC(value) ((value) << 0)
#define L3_SCC(value) ((value) << 1)
@ -50,6 +54,7 @@ struct drm_i915_mocs_table {
/* Helper defines */
#define GEN9_NUM_MOCS_ENTRIES 64 /* 63-64 are reserved, but configured. */
#define PVC_NUM_MOCS_ENTRIES 3
#define MTL_NUM_MOCS_ENTRIES 16
/* (e)LLC caching options */
/*
@ -73,6 +78,12 @@ struct drm_i915_mocs_table {
#define L3_2_RESERVED _L3_CACHEABILITY(2)
#define L3_3_WB _L3_CACHEABILITY(3)
/* L4 caching options */
#define L4_0_WB _L4_CACHEABILITY(0)
#define L4_1_WT _L4_CACHEABILITY(1)
#define L4_2_RESERVED _L4_CACHEABILITY(2)
#define L4_3_UC _L4_CACHEABILITY(3)
#define MOCS_ENTRY(__idx, __control_value, __l3cc_value) \
[__idx] = { \
.control_value = __control_value, \
@ -416,6 +427,57 @@ static const struct drm_i915_mocs_entry pvc_mocs_table[] = {
MOCS_ENTRY(2, 0, L3_3_WB),
};
static const struct drm_i915_mocs_entry mtl_mocs_table[] = {
/* Error - Reserved for Non-Use */
MOCS_ENTRY(0,
IG_PAT(0),
L3_LKUP(1) | L3_3_WB),
/* Cached - L3 + L4 */
MOCS_ENTRY(1,
IG_PAT(1),
L3_LKUP(1) | L3_3_WB),
/* L4 - GO:L3 */
MOCS_ENTRY(2,
IG_PAT(1),
L3_LKUP(1) | L3_1_UC),
/* Uncached - GO:L3 */
MOCS_ENTRY(3,
IG_PAT(1) | L4_3_UC,
L3_LKUP(1) | L3_1_UC),
/* L4 - GO:Mem */
MOCS_ENTRY(4,
IG_PAT(1),
L3_LKUP(1) | L3_GLBGO(1) | L3_1_UC),
/* Uncached - GO:Mem */
MOCS_ENTRY(5,
IG_PAT(1) | L4_3_UC,
L3_LKUP(1) | L3_GLBGO(1) | L3_1_UC),
/* L4 - L3:NoLKUP; GO:L3 */
MOCS_ENTRY(6,
IG_PAT(1),
L3_1_UC),
/* Uncached - L3:NoLKUP; GO:L3 */
MOCS_ENTRY(7,
IG_PAT(1) | L4_3_UC,
L3_1_UC),
/* L4 - L3:NoLKUP; GO:Mem */
MOCS_ENTRY(8,
IG_PAT(1),
L3_GLBGO(1) | L3_1_UC),
/* Uncached - L3:NoLKUP; GO:Mem */
MOCS_ENTRY(9,
IG_PAT(1) | L4_3_UC,
L3_GLBGO(1) | L3_1_UC),
/* Display - L3; L4:WT */
MOCS_ENTRY(14,
IG_PAT(1) | L4_1_WT,
L3_LKUP(1) | L3_3_WB),
/* CCS - Non-Displayable */
MOCS_ENTRY(15,
IG_PAT(1),
L3_GLBGO(1) | L3_1_UC),
};
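For readers unfamiliar with the MOCS macros, here is a standalone sketch of how one control_value from the table above expands, using the shifts declared earlier in this hunk (IG_PAT at bit 8, L4 cacheability at bits 3:2). Only the control_value side is shown; the l3cc side uses the separate L3_* shifts.

#include <stdio.h>

/* Local copies of the shifts from the defines above. */
#define EX_IG_PAT(v)    ((v) << 8)
#define EX_L4_CACHE(v)  ((v) << 2)

int main(void)
{
        /* mtl_mocs_table[3], "Uncached - GO:L3": IG_PAT(1) | L4_3_UC */
        unsigned int control_value = EX_IG_PAT(1) | EX_L4_CACHE(3);

        printf("MOCS[3] control_value = 0x%x\n", control_value); /* 0x10c */
        return 0;
}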
enum {
HAS_GLOBAL_MOCS = BIT(0),
HAS_ENGINE_MOCS = BIT(1),
@ -445,7 +507,13 @@ static unsigned int get_mocs_settings(const struct drm_i915_private *i915,
memset(table, 0, sizeof(struct drm_i915_mocs_table));
table->unused_entries_index = I915_MOCS_PTE;
if (IS_PONTEVECCHIO(i915)) {
if (IS_METEORLAKE(i915)) {
table->size = ARRAY_SIZE(mtl_mocs_table);
table->table = mtl_mocs_table;
table->n_entries = MTL_NUM_MOCS_ENTRIES;
table->uc_index = 9;
table->unused_entries_index = 1;
} else if (IS_PONTEVECCHIO(i915)) {
table->size = ARRAY_SIZE(pvc_mocs_table);
table->table = pvc_mocs_table;
table->n_entries = PVC_NUM_MOCS_ENTRIES;


@ -181,7 +181,7 @@ struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt,
void ppgtt_bind_vma(struct i915_address_space *vm,
struct i915_vm_pt_stash *stash,
struct i915_vma_resource *vma_res,
enum i915_cache_level cache_level,
unsigned int pat_index,
u32 flags)
{
u32 pte_flags;
@ -199,7 +199,7 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
if (vma_res->bi.lmem)
pte_flags |= PTE_LM;
vm->insert_entries(vm, vma_res, cache_level, pte_flags);
vm->insert_entries(vm, vma_res, pat_index, pte_flags);
wmb();
}


@ -53,11 +53,6 @@ static struct drm_i915_private *rc6_to_i915(struct intel_rc6 *rc)
return rc6_to_gt(rc)->i915;
}
static void set(struct intel_uncore *uncore, i915_reg_t reg, u32 val)
{
intel_uncore_write_fw(uncore, reg, val);
}
static void gen11_rc6_enable(struct intel_rc6 *rc6)
{
struct intel_gt *gt = rc6_to_gt(rc6);
@ -72,19 +67,19 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
*/
if (!intel_uc_uses_guc_rc(&gt->uc)) {
/* 2b: Program RC6 thresholds.*/
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85);
set(uncore, GEN10_MEDIA_WAKE_RATE_LIMIT, 150);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85);
intel_uncore_write_fw(uncore, GEN10_MEDIA_WAKE_RATE_LIMIT, 150);
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GUC_MAX_IDLE_COUNT, 0xA);
intel_uncore_write_fw(uncore, GUC_MAX_IDLE_COUNT, 0xA);
set(uncore, GEN6_RC_SLEEP, 0);
intel_uncore_write_fw(uncore, GEN6_RC_SLEEP, 0);
set(uncore, GEN6_RC6_THRESHOLD, 50000); /* 50/125ms per EI */
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 50000); /* 50/125ms per EI */
}
/*
@ -105,8 +100,8 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
* Broadwell+, To be conservative, we want to factor in a context
* switch on top (due to ksoftirqd).
*/
set(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 60);
set(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 60);
intel_uncore_write_fw(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 60);
intel_uncore_write_fw(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 60);
/* 3a: Enable RC6
*
@ -122,8 +117,14 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
GEN6_RC_CTL_RC6_ENABLE |
GEN6_RC_CTL_EI_MODE(1);
/* Wa_16011777198 - Render powergating must remain disabled */
if (IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) ||
/*
* Wa_16011777198 and BSpec 52698 - Render powergating must be off.
* FIXME BSpec is outdated, disabling powergating for MTL is just
* a temporary WA and should be removed once the real cause of the
* forcewake timeouts is fixed.
*/
if (IS_METEORLAKE(gt->i915) ||
IS_DG2_GRAPHICS_STEP(gt->i915, G10, STEP_A0, STEP_C0) ||
IS_DG2_GRAPHICS_STEP(gt->i915, G11, STEP_A0, STEP_B0))
pg_enable =
GEN9_MEDIA_PG_ENABLE |
@ -141,7 +142,7 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
VDN_MFX_POWERGATE_ENABLE(i));
}
set(uncore, GEN9_PG_ENABLE, pg_enable);
intel_uncore_write_fw(uncore, GEN9_PG_ENABLE, pg_enable);
}
static void gen9_rc6_enable(struct intel_rc6 *rc6)
@ -152,26 +153,26 @@ static void gen9_rc6_enable(struct intel_rc6 *rc6)
/* 2b: Program RC6 thresholds.*/
if (GRAPHICS_VER(rc6_to_i915(rc6)) >= 11) {
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85);
set(uncore, GEN10_MEDIA_WAKE_RATE_LIMIT, 150);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16 | 85);
intel_uncore_write_fw(uncore, GEN10_MEDIA_WAKE_RATE_LIMIT, 150);
} else if (IS_SKYLAKE(rc6_to_i915(rc6))) {
/*
* WaRsDoubleRc6WrlWithCoarsePowerGating:skl Doubling WRL only
* when CPG is enabled
*/
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 108 << 16);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 108 << 16);
} else {
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 54 << 16);
}
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GUC_MAX_IDLE_COUNT, 0xA);
intel_uncore_write_fw(uncore, GUC_MAX_IDLE_COUNT, 0xA);
set(uncore, GEN6_RC_SLEEP, 0);
intel_uncore_write_fw(uncore, GEN6_RC_SLEEP, 0);
/*
* 2c: Program Coarse Power Gating Policies.
@ -194,11 +195,11 @@ static void gen9_rc6_enable(struct intel_rc6 *rc6)
* conservative, we have to factor in a context switch on top (due
* to ksoftirqd).
*/
set(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 250);
set(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 250);
intel_uncore_write_fw(uncore, GEN9_MEDIA_PG_IDLE_HYSTERESIS, 250);
intel_uncore_write_fw(uncore, GEN9_RENDER_PG_IDLE_HYSTERESIS, 250);
/* 3a: Enable RC6 */
set(uncore, GEN6_RC6_THRESHOLD, 37500); /* 37.5/125ms per EI */
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 37500); /* 37.5/125ms per EI */
rc6->ctl_enable =
GEN6_RC_CTL_HW_ENABLE |
@ -210,8 +211,8 @@ static void gen9_rc6_enable(struct intel_rc6 *rc6)
* - Render/Media PG need to be disabled with RC6.
*/
if (!NEEDS_WaRsDisableCoarsePowerGating(rc6_to_i915(rc6)))
set(uncore, GEN9_PG_ENABLE,
GEN9_RENDER_PG_ENABLE | GEN9_MEDIA_PG_ENABLE);
intel_uncore_write_fw(uncore, GEN9_PG_ENABLE,
GEN9_RENDER_PG_ENABLE | GEN9_MEDIA_PG_ENABLE);
}
static void gen8_rc6_enable(struct intel_rc6 *rc6)
@ -221,13 +222,13 @@ static void gen8_rc6_enable(struct intel_rc6 *rc6)
enum intel_engine_id id;
/* 2b: Program RC6 thresholds.*/
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16);
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16);
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GEN6_RC_SLEEP, 0);
set(uncore, GEN6_RC6_THRESHOLD, 625); /* 800us/1.28 for TO */
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, GEN6_RC_SLEEP, 0);
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 625); /* 800us/1.28 for TO */
/* 3: Enable RC6 */
rc6->ctl_enable =
@ -245,20 +246,20 @@ static void gen6_rc6_enable(struct intel_rc6 *rc6)
u32 rc6vids, rc6_mask;
int ret;
set(uncore, GEN6_RC1_WAKE_RATE_LIMIT, 1000 << 16);
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16 | 30);
set(uncore, GEN6_RC6pp_WAKE_RATE_LIMIT, 30);
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000);
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25);
intel_uncore_write_fw(uncore, GEN6_RC1_WAKE_RATE_LIMIT, 1000 << 16);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16 | 30);
intel_uncore_write_fw(uncore, GEN6_RC6pp_WAKE_RATE_LIMIT, 30);
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000);
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25);
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GEN6_RC_SLEEP, 0);
set(uncore, GEN6_RC1e_THRESHOLD, 1000);
set(uncore, GEN6_RC6_THRESHOLD, 50000);
set(uncore, GEN6_RC6p_THRESHOLD, 150000);
set(uncore, GEN6_RC6pp_THRESHOLD, 64000); /* unused */
intel_uncore_write_fw(uncore, GEN6_RC_SLEEP, 0);
intel_uncore_write_fw(uncore, GEN6_RC1e_THRESHOLD, 1000);
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 50000);
intel_uncore_write_fw(uncore, GEN6_RC6p_THRESHOLD, 150000);
intel_uncore_write_fw(uncore, GEN6_RC6pp_THRESHOLD, 64000); /* unused */
/* We don't use those on Haswell */
rc6_mask = GEN6_RC_CTL_RC6_ENABLE;
@ -372,22 +373,22 @@ static void chv_rc6_enable(struct intel_rc6 *rc6)
enum intel_engine_id id;
/* 2a: Program RC6 thresholds.*/
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16);
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 40 << 16);
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000); /* 12500 * 1280ns */
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25); /* 25 * 1280ns */
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GEN6_RC_SLEEP, 0);
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, GEN6_RC_SLEEP, 0);
/* TO threshold set to 500 us (0x186 * 1.28 us) */
set(uncore, GEN6_RC6_THRESHOLD, 0x186);
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 0x186);
/* Allows RC6 residency counter to work */
set(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH |
VLV_MEDIA_RC6_COUNT_EN |
VLV_RENDER_RC6_COUNT_EN));
intel_uncore_write_fw(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH |
VLV_MEDIA_RC6_COUNT_EN |
VLV_RENDER_RC6_COUNT_EN));
/* 3: Enable RC6 */
rc6->ctl_enable = GEN7_RC_CTL_TO_MODE;
@ -399,22 +400,22 @@ static void vlv_rc6_enable(struct intel_rc6 *rc6)
struct intel_engine_cs *engine;
enum intel_engine_id id;
set(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 0x00280000);
set(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000);
set(uncore, GEN6_RC_IDLE_HYSTERSIS, 25);
intel_uncore_write_fw(uncore, GEN6_RC6_WAKE_RATE_LIMIT, 0x00280000);
intel_uncore_write_fw(uncore, GEN6_RC_EVALUATION_INTERVAL, 125000);
intel_uncore_write_fw(uncore, GEN6_RC_IDLE_HYSTERSIS, 25);
for_each_engine(engine, rc6_to_gt(rc6), id)
set(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
intel_uncore_write_fw(uncore, RING_MAX_IDLE(engine->mmio_base), 10);
set(uncore, GEN6_RC6_THRESHOLD, 0x557);
intel_uncore_write_fw(uncore, GEN6_RC6_THRESHOLD, 0x557);
/* Allows RC6 residency counter to work */
set(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH |
VLV_MEDIA_RC0_COUNT_EN |
VLV_RENDER_RC0_COUNT_EN |
VLV_MEDIA_RC6_COUNT_EN |
VLV_RENDER_RC6_COUNT_EN));
intel_uncore_write_fw(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH |
VLV_MEDIA_RC0_COUNT_EN |
VLV_RENDER_RC0_COUNT_EN |
VLV_MEDIA_RC6_COUNT_EN |
VLV_RENDER_RC6_COUNT_EN));
rc6->ctl_enable =
GEN7_RC_CTL_TO_MODE | VLV_RC_CTL_CTX_RST_PARALLEL;
@ -575,9 +576,9 @@ static void __intel_rc6_disable(struct intel_rc6 *rc6)
intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
if (GRAPHICS_VER(i915) >= 9)
set(uncore, GEN9_PG_ENABLE, 0);
set(uncore, GEN6_RC_CONTROL, 0);
set(uncore, GEN6_RC_STATE, 0);
intel_uncore_write_fw(uncore, GEN9_PG_ENABLE, 0);
intel_uncore_write_fw(uncore, GEN6_RC_CONTROL, 0);
intel_uncore_write_fw(uncore, GEN6_RC_STATE, 0);
intel_uncore_forcewake_put(uncore, FORCEWAKE_ALL);
}
@ -684,7 +685,7 @@ void intel_rc6_unpark(struct intel_rc6 *rc6)
return;
/* Restore HW timers for automatic RC6 entry while busy */
set(uncore, GEN6_RC_CONTROL, rc6->ctl_enable);
intel_uncore_write_fw(uncore, GEN6_RC_CONTROL, rc6->ctl_enable);
}
void intel_rc6_park(struct intel_rc6 *rc6)
@ -704,7 +705,7 @@ void intel_rc6_park(struct intel_rc6 *rc6)
return;
/* Turn off the HW timers and go directly to rc6 */
set(uncore, GEN6_RC_CONTROL, GEN6_RC_CTL_RC6_ENABLE);
intel_uncore_write_fw(uncore, GEN6_RC_CONTROL, GEN6_RC_CTL_RC6_ENABLE);
if (HAS_RC6pp(rc6_to_i915(rc6)))
target = 0x6; /* deepest rc6 */
@ -712,7 +713,7 @@ void intel_rc6_park(struct intel_rc6 *rc6)
target = 0x5; /* deep rc6 */
else
target = 0x4; /* normal rc6 */
set(uncore, GEN6_RC_STATE, target << RC_SW_TARGET_STATE_SHIFT);
intel_uncore_write_fw(uncore, GEN6_RC_STATE, target << RC_SW_TARGET_STATE_SHIFT);
}
void intel_rc6_disable(struct intel_rc6 *rc6)
@ -735,7 +736,7 @@ void intel_rc6_fini(struct intel_rc6 *rc6)
/* We want the BIOS C6 state preserved across loads for MTL */
if (IS_METEORLAKE(rc6_to_i915(rc6)) && rc6->bios_state_captured)
set(uncore, GEN6_RC_STATE, rc6->bios_rc_state);
intel_uncore_write_fw(uncore, GEN6_RC_STATE, rc6->bios_rc_state);
pctx = fetch_and_zero(&rc6->pctx);
if (pctx)
@ -766,18 +767,18 @@ static u64 vlv_residency_raw(struct intel_uncore *uncore, const i915_reg_t reg)
* before we have set the default VLV_COUNTER_CONTROL value. So always
* set the high bit to be safe.
*/
set(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH));
intel_uncore_write_fw(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH));
upper = intel_uncore_read_fw(uncore, reg);
do {
tmp = upper;
set(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_DISABLE(VLV_COUNT_RANGE_HIGH));
intel_uncore_write_fw(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_DISABLE(VLV_COUNT_RANGE_HIGH));
lower = intel_uncore_read_fw(uncore, reg);
set(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH));
intel_uncore_write_fw(uncore, VLV_COUNTER_CONTROL,
_MASKED_BIT_ENABLE(VLV_COUNT_RANGE_HIGH));
upper = intel_uncore_read_fw(uncore, reg);
} while (upper != tmp && --loop);
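The upper/lower/upper dance above is the standard way to read a 64-bit counter exposed as two 32-bit halves that may wrap between reads; a generic, driver-independent sketch of the same pattern follows. The read_hi/read_lo callbacks stand in for the actual register accesses, which in the code above are selected by toggling VLV_COUNT_RANGE_HIGH.

#include <stdint.h>

/*
 * Read high, read low, re-read high: retry while the high half changed
 * underneath us, so both halves come from the same counter epoch.
 */
static uint64_t ex_read_split_counter(uint32_t (*read_hi)(void),
                                      uint32_t (*read_lo)(void))
{
        uint32_t hi, tmp, lo;
        int retries = 3;

        hi = read_hi();
        do {
                tmp = hi;
                lo = read_lo();
                hi = read_hi();
        } while (hi != tmp && --retries);

        return ((uint64_t)hi << 32) | lo;
}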


@ -812,11 +812,25 @@ static void dg2_ctx_workarounds_init(struct intel_engine_cs *engine,
wa_masked_en(wal, CACHE_MODE_1, MSAA_OPTIMIZATION_REDUC_DISABLE);
}
static void mtl_ctx_gt_tuning_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
dg2_ctx_gt_tuning_init(engine, wal);
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_B0, STEP_FOREVER) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_B0, STEP_FOREVER))
wa_add(wal, DRAW_WATERMARK, VERT_WM_VAL, 0x3FF, 0, false);
}
static void mtl_ctx_workarounds_init(struct intel_engine_cs *engine,
struct i915_wa_list *wal)
{
struct drm_i915_private *i915 = engine->i915;
mtl_ctx_gt_tuning_init(engine, wal);
if (IS_MTL_GRAPHICS_STEP(i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(i915, P, STEP_A0, STEP_B0)) {
/* Wa_14014947963 */
@ -1695,14 +1709,20 @@ pvc_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
static void
xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
{
/* Wa_14018778641 / Wa_18018781329 */
wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
/* Wa_22016670082 */
wa_write_or(wal, GEN12_SQCNT1, GEN12_STRICT_RAR_ENABLE);
if (IS_MTL_GRAPHICS_STEP(gt->i915, M, STEP_A0, STEP_B0) ||
IS_MTL_GRAPHICS_STEP(gt->i915, P, STEP_A0, STEP_B0)) {
/* Wa_14014830051 */
wa_mcr_write_clr(wal, SARB_CHICKEN1, COMP_CKN_IN);
/* Wa_18018781329 */
wa_mcr_write_or(wal, RENDER_MOD_CTRL, FORCE_MISS_FTLB);
wa_mcr_write_or(wal, COMP_MOD_CTRL, FORCE_MISS_FTLB);
/* Wa_14015795083 */
wa_write_clr(wal, GEN7_MISCCPCTL, GEN12_DOP_CLOCK_GATE_RENDER_ENABLE);
}
/*
@ -1715,17 +1735,16 @@ xelpg_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
static void
xelpmp_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
{
if (IS_MTL_MEDIA_STEP(gt->i915, STEP_A0, STEP_B0)) {
/*
* Wa_18018781329
*
* Note that although these registers are MCR on the primary
* GT, the media GT's versions are regular singleton registers.
*/
wa_write_or(wal, XELPMP_GSC_MOD_CTRL, FORCE_MISS_FTLB);
wa_write_or(wal, XELPMP_VDBX_MOD_CTRL, FORCE_MISS_FTLB);
wa_write_or(wal, XELPMP_VEBX_MOD_CTRL, FORCE_MISS_FTLB);
}
/*
* Wa_14018778641
* Wa_18018781329
*
* Note that although these registers are MCR on the primary
* GT, the media GT's versions are regular singleton registers.
*/
wa_write_or(wal, XELPMP_GSC_MOD_CTRL, FORCE_MISS_FTLB);
wa_write_or(wal, XELPMP_VDBX_MOD_CTRL, FORCE_MISS_FTLB);
wa_write_or(wal, XELPMP_VEBX_MOD_CTRL, FORCE_MISS_FTLB);
debug_dump_steering(gt);
}
@ -1743,6 +1762,13 @@ xelpmp_gt_workarounds_init(struct intel_gt *gt, struct i915_wa_list *wal)
*/
static void gt_tuning_settings(struct intel_gt *gt, struct i915_wa_list *wal)
{
if (IS_METEORLAKE(gt->i915)) {
if (gt->type != GT_MEDIA)
wa_mcr_write_or(wal, XEHP_L3SCQREG7, BLEND_FILL_CACHING_OPT_DIS);
wa_mcr_write_or(wal, XEHP_SQCM, EN_32B_ACCESS);
}
if (IS_PONTEVECCHIO(gt->i915)) {
wa_mcr_write(wal, XEHPC_L3SCRUB,
SCRUB_CL_DWNGRADE_SHARED | SCRUB_RATE_4B_PER_CLK);
@ -2939,7 +2965,7 @@ static void
add_render_compute_tuning_settings(struct drm_i915_private *i915,
struct i915_wa_list *wal)
{
if (IS_DG2(i915))
if (IS_METEORLAKE(i915) || IS_DG2(i915))
wa_mcr_write_clr_set(wal, RT_CTRL, STACKID_CTRL, STACKID_CTRL_512);
/*


@ -5,6 +5,7 @@
#include <linux/sort.h>
#include "gt/intel_gt_print.h"
#include "i915_selftest.h"
#include "intel_engine_regs.h"
#include "intel_gpu_commands.h"
@ -402,7 +403,7 @@ static int live_engine_pm(void *arg)
/* gt wakeref is async (deferred to workqueue) */
if (intel_gt_pm_wait_for_idle(gt)) {
pr_err("GT failed to idle\n");
gt_err(gt, "GT failed to idle\n");
return -EINVAL;
}
}


@ -137,7 +137,7 @@ err_free_src:
static int intel_context_copy_ccs(struct intel_context *ce,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool write_to_ccs,
struct i915_request **out)
{
@ -185,7 +185,7 @@ static int intel_context_copy_ccs(struct intel_context *ce,
if (err)
goto out_rq;
len = emit_pte(rq, &it, cache_level, true, offset, CHUNK_SZ);
len = emit_pte(rq, &it, pat_index, true, offset, CHUNK_SZ);
if (len <= 0) {
err = len;
goto out_rq;
@ -223,7 +223,7 @@ intel_migrate_ccs_copy(struct intel_migrate *m,
struct i915_gem_ww_ctx *ww,
const struct i915_deps *deps,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool write_to_ccs,
struct i915_request **out)
{
@ -243,7 +243,7 @@ intel_migrate_ccs_copy(struct intel_migrate *m,
if (err)
goto out;
err = intel_context_copy_ccs(ce, deps, sg, cache_level,
err = intel_context_copy_ccs(ce, deps, sg, pat_index,
write_to_ccs, out);
intel_context_unpin(ce);
@ -300,7 +300,7 @@ static int clear(struct intel_migrate *migrate,
/* Write the obj data into ccs surface */
err = intel_migrate_ccs_copy(migrate, &ww, NULL,
obj->mm.pages->sgl,
obj->cache_level,
obj->pat_index,
true, &rq);
if (rq && !err) {
if (i915_request_wait(rq, 0, HZ) < 0) {
@ -351,7 +351,7 @@ static int clear(struct intel_migrate *migrate,
err = intel_migrate_ccs_copy(migrate, &ww, NULL,
obj->mm.pages->sgl,
obj->cache_level,
obj->pat_index,
false, &rq);
if (rq && !err) {
if (i915_request_wait(rq, 0, HZ) < 0) {
@ -414,9 +414,9 @@ static int __migrate_copy(struct intel_migrate *migrate,
struct i915_request **out)
{
return intel_migrate_copy(migrate, ww, NULL,
src->mm.pages->sgl, src->cache_level,
src->mm.pages->sgl, src->pat_index,
i915_gem_object_is_lmem(src),
dst->mm.pages->sgl, dst->cache_level,
dst->mm.pages->sgl, dst->pat_index,
i915_gem_object_is_lmem(dst),
out);
}
@ -428,9 +428,9 @@ static int __global_copy(struct intel_migrate *migrate,
struct i915_request **out)
{
return intel_context_migrate_copy(migrate->context, NULL,
src->mm.pages->sgl, src->cache_level,
src->mm.pages->sgl, src->pat_index,
i915_gem_object_is_lmem(src),
dst->mm.pages->sgl, dst->cache_level,
dst->mm.pages->sgl, dst->pat_index,
i915_gem_object_is_lmem(dst),
out);
}
@ -455,7 +455,7 @@ static int __migrate_clear(struct intel_migrate *migrate,
{
return intel_migrate_clear(migrate, ww, NULL,
obj->mm.pages->sgl,
obj->cache_level,
obj->pat_index,
i915_gem_object_is_lmem(obj),
value, out);
}
@ -468,7 +468,7 @@ static int __global_clear(struct intel_migrate *migrate,
{
return intel_context_migrate_clear(migrate->context, NULL,
obj->mm.pages->sgl,
obj->cache_level,
obj->pat_index,
i915_gem_object_is_lmem(obj),
value, out);
}
@ -648,7 +648,7 @@ static int live_emit_pte_full_ring(void *arg)
*/
pr_info("%s emite_pte ring space=%u\n", __func__, rq->ring->space);
it = sg_sgt(obj->mm.pages->sgl);
len = emit_pte(rq, &it, obj->cache_level, false, 0, CHUNK_SZ);
len = emit_pte(rq, &it, obj->pat_index, false, 0, CHUNK_SZ);
if (!len) {
err = -EINVAL;
goto out_rq;
@ -844,7 +844,7 @@ static int wrap_ktime_compare(const void *A, const void *B)
static int __perf_clear_blt(struct intel_context *ce,
struct scatterlist *sg,
enum i915_cache_level cache_level,
unsigned int pat_index,
bool is_lmem,
size_t sz)
{
@ -858,7 +858,7 @@ static int __perf_clear_blt(struct intel_context *ce,
t0 = ktime_get();
err = intel_context_migrate_clear(ce, NULL, sg, cache_level,
err = intel_context_migrate_clear(ce, NULL, sg, pat_index,
is_lmem, 0, &rq);
if (rq) {
if (i915_request_wait(rq, 0, MAX_SCHEDULE_TIMEOUT) < 0)
@ -904,7 +904,8 @@ static int perf_clear_blt(void *arg)
err = __perf_clear_blt(gt->migrate.context,
dst->mm.pages->sgl,
I915_CACHE_NONE,
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
i915_gem_object_is_lmem(dst),
sizes[i]);
@ -919,10 +920,10 @@ static int perf_clear_blt(void *arg)
static int __perf_copy_blt(struct intel_context *ce,
struct scatterlist *src,
enum i915_cache_level src_cache_level,
unsigned int src_pat_index,
bool src_is_lmem,
struct scatterlist *dst,
enum i915_cache_level dst_cache_level,
unsigned int dst_pat_index,
bool dst_is_lmem,
size_t sz)
{
@ -937,9 +938,9 @@ static int __perf_copy_blt(struct intel_context *ce,
t0 = ktime_get();
err = intel_context_migrate_copy(ce, NULL,
src, src_cache_level,
src, src_pat_index,
src_is_lmem,
dst, dst_cache_level,
dst, dst_pat_index,
dst_is_lmem,
&rq);
if (rq) {
@ -994,10 +995,12 @@ static int perf_copy_blt(void *arg)
err = __perf_copy_blt(gt->migrate.context,
src->mm.pages->sgl,
I915_CACHE_NONE,
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
i915_gem_object_is_lmem(src),
dst->mm.pages->sgl,
I915_CACHE_NONE,
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
i915_gem_object_is_lmem(dst),
sz);


@ -131,13 +131,14 @@ static int read_mocs_table(struct i915_request *rq,
const struct drm_i915_mocs_table *table,
u32 *offset)
{
struct intel_gt *gt = rq->engine->gt;
u32 addr;
if (!table)
return 0;
if (HAS_GLOBAL_MOCS_REGISTERS(rq->engine->i915))
addr = global_mocs_offset();
addr = global_mocs_offset() + gt->uncore->gsi_offset;
else
addr = mocs_offset(rq->engine);


@ -86,7 +86,9 @@ __igt_reset_stolen(struct intel_gt *gt,
ggtt->vm.insert_page(&ggtt->vm, dma,
ggtt->error_capture.start,
I915_CACHE_NONE, 0);
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
0);
mb();
s = io_mapping_map_wc(&ggtt->iomap,
@ -127,7 +129,9 @@ __igt_reset_stolen(struct intel_gt *gt,
ggtt->vm.insert_page(&ggtt->vm, dma,
ggtt->error_capture.start,
I915_CACHE_NONE, 0);
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
0);
mb();
s = io_mapping_map_wc(&ggtt->iomap,


@ -70,6 +70,31 @@ static int slpc_set_freq(struct intel_gt *gt, u32 freq)
return err;
}
static int slpc_restore_freq(struct intel_guc_slpc *slpc, u32 min, u32 max)
{
int err;
err = slpc_set_max_freq(slpc, max);
if (err) {
pr_err("Unable to restore max freq");
return err;
}
err = slpc_set_min_freq(slpc, min);
if (err) {
pr_err("Unable to restore min freq");
return err;
}
err = intel_guc_slpc_set_ignore_eff_freq(slpc, false);
if (err) {
pr_err("Unable to restore efficient freq");
return err;
}
return 0;
}
static u64 measure_power_at_freq(struct intel_gt *gt, int *freq, u64 *power)
{
int err = 0;
@ -268,8 +293,7 @@ static int run_test(struct intel_gt *gt, int test_type)
/*
* Set min frequency to RPn so that we can test the whole
* range of RPn-RP0. This also turns off efficient freq
* usage and makes results more predictable.
* range of RPn-RP0.
*/
err = slpc_set_min_freq(slpc, slpc->min_freq);
if (err) {
@ -277,6 +301,15 @@ static int run_test(struct intel_gt *gt, int test_type)
return err;
}
/*
* Turn off efficient frequency so RPn/RP0 ranges are obeyed.
*/
err = intel_guc_slpc_set_ignore_eff_freq(slpc, true);
if (err) {
pr_err("Unable to turn off efficient freq!");
return err;
}
intel_gt_pm_wait_for_idle(gt);
intel_gt_pm_get(gt);
for_each_engine(engine, gt, id) {
@ -358,9 +391,8 @@ static int run_test(struct intel_gt *gt, int test_type)
break;
}
/* Restore min/max frequencies */
slpc_set_max_freq(slpc, slpc_max_freq);
slpc_set_min_freq(slpc, slpc_min_freq);
/* Restore min/max/efficient frequencies */
err = slpc_restore_freq(slpc, slpc_min_freq, slpc_max_freq);
if (igt_flush_test(gt->i915))
err = -EIO;


@ -836,7 +836,7 @@ static int setup_watcher(struct hwsp_watcher *w, struct intel_gt *gt,
return PTR_ERR(obj);
/* keep the same cache settings as timeline */
i915_gem_object_set_cache_coherency(obj, tl->hwsp_ggtt->obj->cache_level);
i915_gem_object_set_pat_index(obj, tl->hwsp_ggtt->obj->pat_index);
w->map = i915_gem_object_pin_map_unlocked(obj,
page_unmask_bits(tl->hwsp_ggtt->obj->mm.mapping));
if (IS_ERR(w->map)) {


@ -36,6 +36,8 @@ pte_tlbinv(struct intel_context *ce,
u64 length,
struct rnd_state *prng)
{
const unsigned int pat_index =
i915_gem_get_pat_index(ce->vm->i915, I915_CACHE_NONE);
struct drm_i915_gem_object *batch;
struct drm_mm_node vb_node;
struct i915_request *rq;
@ -155,7 +157,7 @@ pte_tlbinv(struct intel_context *ce,
/* Flip the PTE between A and B */
if (i915_gem_object_is_lmem(vb->obj))
pte_flags |= PTE_LM;
ce->vm->insert_entries(ce->vm, &vb_res, 0, pte_flags);
ce->vm->insert_entries(ce->vm, &vb_res, pat_index, pte_flags);
/* Flush the PTE update to concurrent HW */
tlbinv(ce->vm, addr & -length, length);


@ -44,6 +44,7 @@ enum intel_guc_load_status {
enum intel_bootrom_load_status {
INTEL_BOOTROM_STATUS_NO_KEY_FOUND = 0x13,
INTEL_BOOTROM_STATUS_AES_PROD_KEY_FOUND = 0x1A,
INTEL_BOOTROM_STATUS_PROD_KEY_CHECK_FAILURE = 0x2B,
INTEL_BOOTROM_STATUS_RSA_FAILED = 0x50,
INTEL_BOOTROM_STATUS_PAVPC_FAILED = 0x73,
INTEL_BOOTROM_STATUS_WOPCM_FAILED = 0x74,


@ -12,7 +12,7 @@
struct intel_guc;
struct file;
/**
/*
* struct __guc_capture_bufstate
*
* Book-keeping structure used to track read and write pointers
@ -26,7 +26,7 @@ struct __guc_capture_bufstate {
u32 wr;
};
/**
/*
* struct __guc_capture_parsed_output - extracted error capture node
*
* A single unit of extracted error-capture output data grouped together
@ -58,7 +58,7 @@ struct __guc_capture_parsed_output {
#define GCAP_PARSED_REGLIST_INDEX_ENGINST BIT(GUC_CAPTURE_LIST_TYPE_ENGINE_INSTANCE)
};
/**
/*
* struct guc_debug_capture_list_header / struct guc_debug_capture_list
*
* As part of ADS registration, these header structures (followed by
@ -76,7 +76,7 @@ struct guc_debug_capture_list {
struct guc_mmio_reg regs[];
} __packed;
/**
/*
* struct __guc_mmio_reg_descr / struct __guc_mmio_reg_descr_group
*
* intel_guc_capture module uses these structures to maintain static
@ -101,7 +101,7 @@ struct __guc_mmio_reg_descr_group {
struct __guc_mmio_reg_descr *extlist; /* only used for steered registers */
};
/**
/*
* struct guc_state_capture_header_t / struct guc_state_capture_t /
* guc_state_capture_group_header_t / guc_state_capture_group_t
*
@ -148,7 +148,7 @@ struct guc_state_capture_group_t {
struct guc_state_capture_t capture_entries[];
} __packed;
/**
/*
* struct __guc_capture_ads_cache
*
* A structure to cache register lists that were populated and registered
@ -187,6 +187,10 @@ struct intel_guc_state_capture {
struct __guc_capture_ads_cache ads_cache[GUC_CAPTURE_LIST_INDEX_MAX]
[GUC_CAPTURE_LIST_TYPE_MAX]
[GUC_MAX_ENGINE_CLASSES];
/**
* @ads_null_cache: ADS null cache.
*/
void *ads_null_cache;
/**
@ -202,6 +206,10 @@ struct intel_guc_state_capture {
struct list_head cachelist;
#define PREALLOC_NODES_MAX_COUNT (3 * GUC_MAX_ENGINE_CLASSES * GUC_MAX_INSTANCES_PER_CLASS)
#define PREALLOC_NODES_DEFAULT_NUMREGS 64
/**
* @max_mmio_per_node: Max MMIO per node.
*/
int max_mmio_per_node;
/**


@ -13,6 +13,7 @@
#define GSC_FW_STATUS_REG _MMIO(0x116C40)
#define GSC_FW_CURRENT_STATE REG_GENMASK(3, 0)
#define GSC_FW_CURRENT_STATE_RESET 0
#define GSC_FW_PROXY_STATE_NORMAL 5
#define GSC_FW_INIT_COMPLETE_BIT REG_BIT(9)
static bool gsc_is_in_reset(struct intel_uncore *uncore)
@ -23,6 +24,15 @@ static bool gsc_is_in_reset(struct intel_uncore *uncore)
GSC_FW_CURRENT_STATE_RESET;
}
bool intel_gsc_uc_fw_proxy_init_done(struct intel_gsc_uc *gsc)
{
struct intel_uncore *uncore = gsc_uc_to_gt(gsc)->uncore;
u32 fw_status = intel_uncore_read(uncore, GSC_FW_STATUS_REG);
return REG_FIELD_GET(GSC_FW_CURRENT_STATE, fw_status) ==
GSC_FW_PROXY_STATE_NORMAL;
}
bool intel_gsc_uc_fw_init_done(struct intel_gsc_uc *gsc)
{
struct intel_uncore *uncore = gsc_uc_to_gt(gsc)->uncore;
@ -110,6 +120,13 @@ static int gsc_fw_load_prepare(struct intel_gsc_uc *gsc)
if (obj->base.size < gsc->fw.size)
return -ENOSPC;
/*
* Wa_22016122933: For MTL the shared memory needs to be mapped
* as WC on CPU side and UC (PAT index 2) on GPU side
*/
if (IS_METEORLAKE(i915))
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
dst = i915_gem_object_pin_map_unlocked(obj,
i915_coherent_map_type(i915, obj, true));
if (IS_ERR(dst))
@ -125,6 +142,12 @@ static int gsc_fw_load_prepare(struct intel_gsc_uc *gsc)
memset(dst, 0, obj->base.size);
memcpy(dst, src, gsc->fw.size);
/*
* Wa_22016122933: Making sure the data in dst is
* visible to GSC right away
*/
intel_guc_write_barrier(&gt->uc.guc);
i915_gem_object_unpin_map(gsc->fw.obj);
i915_gem_object_unpin_map(obj);


@ -13,5 +13,6 @@ struct intel_uncore;
int intel_gsc_uc_fw_upload(struct intel_gsc_uc *gsc);
bool intel_gsc_uc_fw_init_done(struct intel_gsc_uc *gsc);
bool intel_gsc_uc_fw_proxy_init_done(struct intel_gsc_uc *gsc);
#endif


@ -0,0 +1,424 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2023 Intel Corporation
*/
#include <linux/component.h>
#include "drm/i915_component.h"
#include "drm/i915_gsc_proxy_mei_interface.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_print.h"
#include "intel_gsc_proxy.h"
#include "intel_gsc_uc.h"
#include "intel_gsc_uc_heci_cmd_submit.h"
#include "i915_drv.h"
#include "i915_reg.h"
/*
* GSC proxy:
* The GSC uC needs to communicate with the CSME to perform certain operations.
* Since the GSC can't perform this communication directly on platforms where it
* is integrated in GT, i915 needs to transfer the messages from GSC to CSME
* and back. i915 must manually start the proxy flow after the GSC is loaded to
* signal to GSC that we're ready to handle its messages and allow it to query
* its init data from CSME; GSC will then trigger an HECI2 interrupt if it needs
* to send messages to CSME again.
* The proxy flow is as follows:
* 1 - i915 submits a request to GSC asking for the message to CSME
* 2 - GSC replies with the proxy header + payload for CSME
* 3 - i915 sends the reply from GSC as-is to CSME via the mei proxy component
* 4 - CSME replies with the proxy header + payload for GSC
* 5 - i915 submits a request to GSC with the reply from CSME
* 6 - GSC replies either with a new header + payload (same as step 2, so we
* restart from there) or with an end message.
*/
/*
* The component should load quite quickly in most cases, but it could take
* a bit longer. Use a very large timeout just to cover the worst-case scenario.
*/
#define GSC_PROXY_INIT_TIMEOUT_MS 20000
/* the protocol supports up to 32K in each direction */
#define GSC_PROXY_BUFFER_SIZE SZ_32K
#define GSC_PROXY_CHANNEL_SIZE (GSC_PROXY_BUFFER_SIZE * 2)
#define GSC_PROXY_MAX_MSG_SIZE (GSC_PROXY_BUFFER_SIZE - sizeof(struct intel_gsc_mtl_header))
/* FW-defined proxy header */
struct intel_gsc_proxy_header {
/*
* hdr:
* Bits 0-7: type of the proxy message (see enum intel_gsc_proxy_type)
* Bits 8-15: rsvd
* Bits 16-31: length in bytes of the payload following the proxy header
*/
u32 hdr;
#define GSC_PROXY_TYPE GENMASK(7, 0)
#define GSC_PROXY_PAYLOAD_LENGTH GENMASK(31, 16)
u32 source; /* Source of the Proxy message */
u32 destination; /* Destination of the Proxy message */
#define GSC_PROXY_ADDRESSING_KMD 0x10000
#define GSC_PROXY_ADDRESSING_GSC 0x20000
#define GSC_PROXY_ADDRESSING_CSME 0x30000
u32 status; /* Command status */
} __packed;
/* FW-defined proxy types */
enum intel_gsc_proxy_type {
GSC_PROXY_MSG_TYPE_PROXY_INVALID = 0,
GSC_PROXY_MSG_TYPE_PROXY_QUERY = 1,
GSC_PROXY_MSG_TYPE_PROXY_PAYLOAD = 2,
GSC_PROXY_MSG_TYPE_PROXY_END = 3,
GSC_PROXY_MSG_TYPE_PROXY_NOTIFICATION = 4,
};
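A quick illustration of the hdr layout documented above, using plain shifts and masks instead of the FIELD_PREP()/FIELD_GET() calls the driver uses further down. Bit positions and the QUERY type value come from the definitions above; everything else is made up for this standalone sketch:

#include <assert.h>
#include <stdint.h>

#define PROXY_TYPE_SHIFT	0
#define PROXY_TYPE_MASK		0xffu		/* bits 7:0   */
#define PROXY_LEN_SHIFT		16
#define PROXY_LEN_MASK		0xffffu		/* bits 31:16 */

/* illustrative only: pack the message type and payload length into hdr */
static uint32_t pack_proxy_hdr(uint32_t type, uint32_t payload_len)
{
	return ((type & PROXY_TYPE_MASK) << PROXY_TYPE_SHIFT) |
	       ((payload_len & PROXY_LEN_MASK) << PROXY_LEN_SHIFT);
}

int main(void)
{
	uint32_t hdr = pack_proxy_hdr(1, 0);	/* a QUERY (type 1) has no payload */

	assert(((hdr >> PROXY_TYPE_SHIFT) & PROXY_TYPE_MASK) == 1);
	assert(((hdr >> PROXY_LEN_SHIFT) & PROXY_LEN_MASK) == 0);
	return 0;
}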
struct gsc_proxy_msg {
struct intel_gsc_mtl_header header;
struct intel_gsc_proxy_header proxy_header;
} __packed;
static int proxy_send_to_csme(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct i915_gsc_proxy_component *comp = gsc->proxy.component;
struct intel_gsc_mtl_header *hdr;
void *in = gsc->proxy.to_csme;
void *out = gsc->proxy.to_gsc;
u32 in_size;
int ret;
/* the message forwarded to the CSME contains only the proxy portion, not the MTL header */
hdr = in;
in += sizeof(struct intel_gsc_mtl_header);
out += sizeof(struct intel_gsc_mtl_header);
in_size = hdr->message_size - sizeof(struct intel_gsc_mtl_header);
/* the message must contain at least the proxy header */
if (in_size < sizeof(struct intel_gsc_proxy_header) ||
in_size > GSC_PROXY_MAX_MSG_SIZE) {
gt_err(gt, "Invalid CSME message size: %u\n", in_size);
return -EINVAL;
}
ret = comp->ops->send(comp->mei_dev, in, in_size);
if (ret < 0) {
gt_err(gt, "Failed to send CSME message\n");
return ret;
}
ret = comp->ops->recv(comp->mei_dev, out, GSC_PROXY_MAX_MSG_SIZE);
if (ret < 0) {
gt_err(gt, "Failed to receive CSME message\n");
return ret;
}
return ret;
}
static int proxy_send_to_gsc(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
u32 *marker = gsc->proxy.to_csme; /* first dw of the reply header */
u64 addr_in = i915_ggtt_offset(gsc->proxy.vma);
u64 addr_out = addr_in + GSC_PROXY_BUFFER_SIZE;
u32 size = ((struct gsc_proxy_msg *)gsc->proxy.to_gsc)->header.message_size;
int err;
/* the message must contain at least the gsc and proxy headers */
if (size < sizeof(struct gsc_proxy_msg) || size > GSC_PROXY_BUFFER_SIZE) {
gt_err(gt, "Invalid GSC proxy message size: %u\n", size);
return -EINVAL;
}
/* clear the message marker */
*marker = 0;
/* make sure the marker write is flushed */
wmb();
/* send the request */
err = intel_gsc_uc_heci_cmd_submit_packet(gsc, addr_in, size,
addr_out, GSC_PROXY_BUFFER_SIZE);
if (!err) {
/* wait for the reply to show up */
err = wait_for(*marker != 0, 300);
if (err)
gt_err(gt, "Failed to get a proxy reply from gsc\n");
}
return err;
}
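wait_for() above is i915's polling helper: it re-evaluates the condition until it becomes true or the millisecond timeout expires, returning 0 on success and an error on timeout. A rough, purely illustrative userspace stand-in for that pattern (all names here are made up, this is not the driver macro):

#include <errno.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

static int64_t now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* poll until *marker becomes non-zero or timeout_ms elapses */
int wait_for_marker(volatile uint32_t *marker, int timeout_ms)
{
	int64_t deadline = now_ms() + timeout_ms;

	do {
		if (*marker != 0)
			return 0;
		usleep(1000);	/* a real implementation would back off between polls */
	} while (now_ms() < deadline);

	return *marker != 0 ? 0 : -ETIMEDOUT;
}

int main(void)
{
	volatile uint32_t marker = 1;	/* pretend the GSC reply already landed */

	return wait_for_marker(&marker, 300);
}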
static int validate_proxy_header(struct intel_gsc_proxy_header *header,
u32 source, u32 dest)
{
u32 type = FIELD_GET(GSC_PROXY_TYPE, header->hdr);
u32 length = FIELD_GET(GSC_PROXY_PAYLOAD_LENGTH, header->hdr);
int ret = 0;
if (header->destination != dest || header->source != source) {
ret = -ENOEXEC;
goto fail;
}
switch (type) {
case GSC_PROXY_MSG_TYPE_PROXY_PAYLOAD:
if (length > 0)
break;
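/* a zero-length PAYLOAD message is malformed: fall through and fail like INVALID */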
fallthrough;
case GSC_PROXY_MSG_TYPE_PROXY_INVALID:
ret = -EIO;
goto fail;
default:
break;
}
fail:
return ret;
}
static int proxy_query(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct gsc_proxy_msg *to_gsc = gsc->proxy.to_gsc;
struct gsc_proxy_msg *to_csme = gsc->proxy.to_csme;
int ret;
intel_gsc_uc_heci_cmd_emit_mtl_header(&to_gsc->header,
HECI_MEADDRESS_PROXY,
sizeof(struct gsc_proxy_msg),
0);
to_gsc->proxy_header.hdr =
FIELD_PREP(GSC_PROXY_TYPE, GSC_PROXY_MSG_TYPE_PROXY_QUERY) |
FIELD_PREP(GSC_PROXY_PAYLOAD_LENGTH, 0);
to_gsc->proxy_header.source = GSC_PROXY_ADDRESSING_KMD;
to_gsc->proxy_header.destination = GSC_PROXY_ADDRESSING_GSC;
to_gsc->proxy_header.status = 0;
while (1) {
/* clear the GSC response header space */
memset(gsc->proxy.to_csme, 0, sizeof(struct gsc_proxy_msg));
/* send proxy message to GSC */
ret = proxy_send_to_gsc(gsc);
if (ret) {
gt_err(gt, "failed to send proxy message to GSC! %d\n", ret);
goto proxy_error;
}
/* stop if this was the last message */
if (FIELD_GET(GSC_PROXY_TYPE, to_csme->proxy_header.hdr) ==
GSC_PROXY_MSG_TYPE_PROXY_END)
break;
/* make sure the GSC-to-CSME proxy header is sane */
ret = validate_proxy_header(&to_csme->proxy_header,
GSC_PROXY_ADDRESSING_GSC,
GSC_PROXY_ADDRESSING_CSME);
if (ret) {
gt_err(gt, "invalid GSC to CSME proxy header! %d\n", ret);
goto proxy_error;
}
/* send the GSC message to the CSME */
ret = proxy_send_to_csme(gsc);
if (ret < 0) {
gt_err(gt, "failed to send proxy message to CSME! %d\n", ret);
goto proxy_error;
}
/* update the GSC message size with the returned value from CSME */
to_gsc->header.message_size = ret + sizeof(struct intel_gsc_mtl_header);
/* make sure the CSME-to-GSC proxy header is sane */
ret = validate_proxy_header(&to_gsc->proxy_header,
GSC_PROXY_ADDRESSING_CSME,
GSC_PROXY_ADDRESSING_GSC);
if (ret) {
gt_err(gt, "invalid CSME to GSC proxy header! %d\n", ret);
goto proxy_error;
}
}
proxy_error:
return ret < 0 ? ret : 0;
}
int intel_gsc_proxy_request_handler(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
int err;
if (!gsc->proxy.component_added)
return -ENODEV;
assert_rpm_wakelock_held(gt->uncore->rpm);
/* when GSC is loaded, we can queue this before the component is bound */
err = wait_for(gsc->proxy.component, GSC_PROXY_INIT_TIMEOUT_MS);
if (err) {
gt_err(gt, "GSC proxy component didn't bind within the expected timeout\n");
return -EIO;
}
mutex_lock(&gsc->proxy.mutex);
if (!gsc->proxy.component) {
gt_err(gt, "GSC proxy worker called without the component being bound!\n");
err = -EIO;
} else {
/*
* write the status bit to clear it and allow new proxy
* interrupts to be generated while we handle the current
* request, but be sure not to write the reset bit
*/
intel_uncore_rmw(gt->uncore, HECI_H_CSR(MTL_GSC_HECI2_BASE),
HECI_H_CSR_RST, HECI_H_CSR_IS);
err = proxy_query(gsc);
}
mutex_unlock(&gsc->proxy.mutex);
return err;
}
void intel_gsc_proxy_irq_handler(struct intel_gsc_uc *gsc, u32 iir)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
if (unlikely(!iir))
return;
lockdep_assert_held(gt->irq_lock);
if (!gsc->proxy.component) {
gt_err(gt, "GSC proxy irq received without the component being bound!\n");
return;
}
gsc->gsc_work_actions |= GSC_ACTION_SW_PROXY;
queue_work(gsc->wq, &gsc->work);
}
static int i915_gsc_proxy_component_bind(struct device *i915_kdev,
struct device *mei_kdev, void *data)
{
struct drm_i915_private *i915 = kdev_to_i915(i915_kdev);
struct intel_gt *gt = i915->media_gt;
struct intel_gsc_uc *gsc = &gt->uc.gsc;
intel_wakeref_t wakeref;
/* enable HECI2 IRQs */
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
intel_uncore_rmw(gt->uncore, HECI_H_CSR(MTL_GSC_HECI2_BASE),
HECI_H_CSR_RST, HECI_H_CSR_IE);
mutex_lock(&gsc->proxy.mutex);
gsc->proxy.component = data;
gsc->proxy.component->mei_dev = mei_kdev;
mutex_unlock(&gsc->proxy.mutex);
return 0;
}
static void i915_gsc_proxy_component_unbind(struct device *i915_kdev,
struct device *mei_kdev, void *data)
{
struct drm_i915_private *i915 = kdev_to_i915(i915_kdev);
struct intel_gt *gt = i915->media_gt;
struct intel_gsc_uc *gsc = &gt->uc.gsc;
intel_wakeref_t wakeref;
mutex_lock(&gsc->proxy.mutex);
gsc->proxy.component = NULL;
mutex_unlock(&gsc->proxy.mutex);
/* disable HECI2 IRQs */
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
intel_uncore_rmw(gt->uncore, HECI_H_CSR(MTL_GSC_HECI2_BASE),
HECI_H_CSR_IE | HECI_H_CSR_RST, 0);
}
static const struct component_ops i915_gsc_proxy_component_ops = {
.bind = i915_gsc_proxy_component_bind,
.unbind = i915_gsc_proxy_component_unbind,
};
static int proxy_channel_alloc(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct i915_vma *vma;
void *vaddr;
int err;
err = intel_guc_allocate_and_map_vma(&gt->uc.guc, GSC_PROXY_CHANNEL_SIZE,
&vma, &vaddr);
if (err)
return err;
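/*
* the single allocation is split in half: the first GSC_PROXY_BUFFER_SIZE
* bytes carry messages to the GSC, the second half receives the GSC replies
* that are then relayed to the CSME
*/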
gsc->proxy.vma = vma;
gsc->proxy.to_gsc = vaddr;
gsc->proxy.to_csme = vaddr + GSC_PROXY_BUFFER_SIZE;
return 0;
}
static void proxy_channel_free(struct intel_gsc_uc *gsc)
{
if (!gsc->proxy.vma)
return;
gsc->proxy.to_gsc = NULL;
gsc->proxy.to_csme = NULL;
i915_vma_unpin_and_release(&gsc->proxy.vma, I915_VMA_RELEASE_MAP);
}
void intel_gsc_proxy_fini(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct drm_i915_private *i915 = gt->i915;
if (fetch_and_zero(&gsc->proxy.component_added))
component_del(i915->drm.dev, &i915_gsc_proxy_component_ops);
proxy_channel_free(gsc);
}
int intel_gsc_proxy_init(struct intel_gsc_uc *gsc)
{
int err;
struct intel_gt *gt = gsc_uc_to_gt(gsc);
struct drm_i915_private *i915 = gt->i915;
mutex_init(&gsc->proxy.mutex);
if (!IS_ENABLED(CONFIG_INTEL_MEI_GSC_PROXY)) {
gt_info(gt, "can't init GSC proxy due to missing mei component\n");
return -ENODEV;
}
err = proxy_channel_alloc(gsc);
if (err)
return err;
err = component_add_typed(i915->drm.dev, &i915_gsc_proxy_component_ops,
I915_COMPONENT_GSC_PROXY);
if (err < 0) {
gt_err(gt, "Failed to add GSC_PROXY component (%d)\n", err);
goto out_free;
}
gsc->proxy.component_added = true;
return 0;
out_free:
proxy_channel_free(gsc);
return err;
}


@ -0,0 +1,18 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2023 Intel Corporation
*/
#ifndef _INTEL_GSC_PROXY_H_
#define _INTEL_GSC_PROXY_H_
#include <linux/types.h>
struct intel_gsc_uc;
int intel_gsc_proxy_init(struct intel_gsc_uc *gsc);
void intel_gsc_proxy_fini(struct intel_gsc_uc *gsc);
int intel_gsc_proxy_request_handler(struct intel_gsc_uc *gsc);
void intel_gsc_proxy_irq_handler(struct intel_gsc_uc *gsc, u32 iir);
#endif


@ -10,15 +10,60 @@
#include "intel_gsc_uc.h"
#include "intel_gsc_fw.h"
#include "i915_drv.h"
#include "intel_gsc_proxy.h"
static void gsc_work(struct work_struct *work)
{
struct intel_gsc_uc *gsc = container_of(work, typeof(*gsc), work);
struct intel_gt *gt = gsc_uc_to_gt(gsc);
intel_wakeref_t wakeref;
u32 actions;
int ret;
with_intel_runtime_pm(gt->uncore->rpm, wakeref)
intel_gsc_uc_fw_upload(gsc);
wakeref = intel_runtime_pm_get(gt->uncore->rpm);
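/*
* snapshot and clear the pending actions under the irq lock, so any new
* request queued by the irq handler while we run is picked up by a later
* invocation of this worker
*/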
spin_lock_irq(gt->irq_lock);
actions = gsc->gsc_work_actions;
gsc->gsc_work_actions = 0;
spin_unlock_irq(gt->irq_lock);
if (actions & GSC_ACTION_FW_LOAD) {
ret = intel_gsc_uc_fw_upload(gsc);
if (ret == -EEXIST) /* skip proxy if not a new load */
actions &= ~GSC_ACTION_FW_LOAD;
else if (ret)
goto out_put;
}
if (actions & (GSC_ACTION_FW_LOAD | GSC_ACTION_SW_PROXY)) {
if (!intel_gsc_uc_fw_init_done(gsc)) {
gt_err(gt, "Proxy request received with GSC not loaded!\n");
goto out_put;
}
ret = intel_gsc_proxy_request_handler(gsc);
if (ret)
goto out_put;
/* mark the GSC FW init as done the first time we run this */
if (actions & GSC_ACTION_FW_LOAD) {
/*
* If there is a proxy establishment error, the GSC might still
* complete the request handling cleanly, so we need to check the
* status register to see whether the proxy init was actually successful
*/
if (intel_gsc_uc_fw_proxy_init_done(gsc)) {
drm_dbg(&gt->i915->drm, "GSC Proxy initialized\n");
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_RUNNING);
} else {
drm_err(&gt->i915->drm,
"GSC status reports proxy init not complete\n");
}
}
}
out_put:
intel_runtime_pm_put(gt->uncore->rpm, wakeref);
}
static bool gsc_engine_supported(struct intel_gt *gt)
@ -43,6 +88,8 @@ static bool gsc_engine_supported(struct intel_gt *gt)
void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
intel_uc_fw_init_early(&gsc->fw, INTEL_UC_FW_TYPE_GSC);
INIT_WORK(&gsc->work, gsc_work);
@ -50,10 +97,16 @@ void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc)
* GT with it being not fully setup hence check device info's
* engine mask
*/
if (!gsc_engine_supported(gsc_uc_to_gt(gsc))) {
if (!gsc_engine_supported(gt)) {
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_NOT_SUPPORTED);
return;
}
gsc->wq = alloc_ordered_workqueue("i915_gsc", 0);
if (!gsc->wq) {
gt_err(gt, "failed to allocate WQ for GSC, disabling FW\n");
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_NOT_SUPPORTED);
}
}
int intel_gsc_uc_init(struct intel_gsc_uc *gsc)
@ -88,6 +141,9 @@ int intel_gsc_uc_init(struct intel_gsc_uc *gsc)
gsc->ce = ce;
/* if we fail to init proxy we still want to load GSC for PM */
intel_gsc_proxy_init(gsc);
intel_uc_fw_change_status(&gsc->fw, INTEL_UC_FIRMWARE_LOADABLE);
return 0;
@ -107,6 +163,12 @@ void intel_gsc_uc_fini(struct intel_gsc_uc *gsc)
return;
flush_work(&gsc->work);
if (gsc->wq) {
destroy_workqueue(gsc->wq);
gsc->wq = NULL;
}
intel_gsc_proxy_fini(gsc);
if (gsc->ce)
intel_engine_destroy_pinned_context(fetch_and_zero(&gsc->ce));
@ -145,11 +207,17 @@ void intel_gsc_uc_resume(struct intel_gsc_uc *gsc)
void intel_gsc_uc_load_start(struct intel_gsc_uc *gsc)
{
struct intel_gt *gt = gsc_uc_to_gt(gsc);
if (!intel_uc_fw_is_loadable(&gsc->fw))
return;
if (intel_gsc_uc_fw_init_done(gsc))
return;
queue_work(system_unbound_wq, &gsc->work);
spin_lock_irq(gt->irq_lock);
gsc->gsc_work_actions |= GSC_ACTION_FW_LOAD;
spin_unlock_irq(gt->irq_lock);
queue_work(gsc->wq, &gsc->work);
}


@ -10,6 +10,7 @@
struct i915_vma;
struct intel_context;
struct i915_gsc_proxy_component;
struct intel_gsc_uc {
/* Generic uC firmware management */
@ -19,7 +20,21 @@ struct intel_gsc_uc {
struct i915_vma *local; /* private memory for GSC usage */
struct intel_context *ce; /* for submission to GSC FW via GSC engine */
struct work_struct work; /* for delayed load */
/* for delayed load and proxy handling */
struct workqueue_struct *wq;
struct work_struct work;
u32 gsc_work_actions; /* protected by gt->irq_lock */
#define GSC_ACTION_FW_LOAD BIT(0)
#define GSC_ACTION_SW_PROXY BIT(1)
struct {
struct i915_gsc_proxy_component *component;
bool component_added;
struct i915_vma *vma;
void *to_gsc;
void *to_csme;
struct mutex mutex; /* protects the tee channel binding */
} proxy;
};
void intel_gsc_uc_init_early(struct intel_gsc_uc *gsc);


@ -3,6 +3,7 @@
* Copyright © 2023 Intel Corporation
*/
#include "gt/intel_context.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_gpu_commands.h"
#include "gt/intel_gt.h"
@ -107,3 +108,104 @@ void intel_gsc_uc_heci_cmd_emit_mtl_header(struct intel_gsc_mtl_header *header,
header->header_version = MTL_GSC_HEADER_VERSION;
header->message_size = message_size;
}
static void
emit_gsc_heci_pkt_nonpriv(u32 *cmd, struct intel_gsc_heci_non_priv_pkt *pkt)
{
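/*
* 9-dword batch: command opcode, input address (lo/hi) and size, output
* address (lo/hi) and size, one reserved dword, then MI_BATCH_BUFFER_END
*/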
*cmd++ = GSC_HECI_CMD_PKT;
*cmd++ = lower_32_bits(pkt->addr_in);
*cmd++ = upper_32_bits(pkt->addr_in);
*cmd++ = pkt->size_in;
*cmd++ = lower_32_bits(pkt->addr_out);
*cmd++ = upper_32_bits(pkt->addr_out);
*cmd++ = pkt->size_out;
*cmd++ = 0;
*cmd++ = MI_BATCH_BUFFER_END;
}
int
intel_gsc_uc_heci_cmd_submit_nonpriv(struct intel_gsc_uc *gsc,
struct intel_context *ce,
struct intel_gsc_heci_non_priv_pkt *pkt,
u32 *cmd, int timeout_ms)
{
struct intel_engine_cs *engine;
struct i915_gem_ww_ctx ww;
struct i915_request *rq;
int err, trials = 0;
i915_gem_ww_ctx_init(&ww, false);
retry:
err = i915_gem_object_lock(pkt->bb_vma->obj, &ww);
if (err)
goto out_ww;
err = i915_gem_object_lock(pkt->heci_pkt_vma->obj, &ww);
if (err)
goto out_ww;
err = intel_context_pin_ww(ce, &ww);
if (err)
goto out_ww;
rq = i915_request_create(ce);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_unpin_ce;
}
emit_gsc_heci_pkt_nonpriv(cmd, pkt);
err = i915_vma_move_to_active(pkt->bb_vma, rq, 0);
if (err)
goto out_rq;
err = i915_vma_move_to_active(pkt->heci_pkt_vma, rq, EXEC_OBJECT_WRITE);
if (err)
goto out_rq;
engine = rq->context->engine;
if (engine->emit_init_breadcrumb) {
err = engine->emit_init_breadcrumb(rq);
if (err)
goto out_rq;
}
err = engine->emit_bb_start(rq, i915_vma_offset(pkt->bb_vma), PAGE_SIZE, 0);
if (err)
goto out_rq;
err = ce->engine->emit_flush(rq, 0);
if (err)
drm_err(&gsc_uc_to_gt(gsc)->i915->drm,
"Failed emit-flush for gsc-heci-non-priv-pkterr=%d\n", err);
out_rq:
i915_request_get(rq);
if (unlikely(err))
i915_request_set_error_once(rq, err);
i915_request_add(rq);
if (!err) {
if (i915_request_wait(rq, I915_WAIT_INTERRUPTIBLE,
msecs_to_jiffies(timeout_ms)) < 0)
err = -ETIME;
}
i915_request_put(rq);
out_unpin_ce:
intel_context_unpin(ce);
out_ww:
if (err == -EDEADLK) {
err = i915_gem_ww_ctx_backoff(&ww);
if (!err) {
if (++trials < 10)
goto retry;
else
err = -EAGAIN;
}
}
i915_gem_ww_ctx_fini(&ww);
return err;
}


@ -8,12 +8,16 @@
#include <linux/types.h>
struct i915_vma;
struct intel_context;
struct intel_gsc_uc;
struct intel_gsc_mtl_header {
u32 validity_marker;
#define GSC_HECI_VALIDITY_MARKER 0xA578875A
u8 heci_client_id;
#define HECI_MEADDRESS_PROXY 10
#define HECI_MEADDRESS_PXP 17
#define HECI_MEADDRESS_HDCP 18
@ -47,7 +51,8 @@ struct intel_gsc_mtl_header {
* we distinguish the flags using OUTFLAG or INFLAG
*/
u32 flags;
#define GSC_OUTFLAG_MSG_PENDING 1
#define GSC_OUTFLAG_MSG_PENDING BIT(0)
#define GSC_INFLAG_MSG_CLEANUP BIT(1)
u32 status;
} __packed;
@ -58,4 +63,24 @@ int intel_gsc_uc_heci_cmd_submit_packet(struct intel_gsc_uc *gsc,
void intel_gsc_uc_heci_cmd_emit_mtl_header(struct intel_gsc_mtl_header *header,
u8 heci_client_id, u32 message_size,
u64 host_session_id);
struct intel_gsc_heci_non_priv_pkt {
u64 addr_in;
u32 size_in;
u64 addr_out;
u32 size_out;
struct i915_vma *heci_pkt_vma;
struct i915_vma *bb_vma;
};
void
intel_gsc_uc_heci_cmd_emit_mtl_header(struct intel_gsc_mtl_header *header,
u8 heci_client_id, u32 msg_size,
u64 host_session_id);
int
intel_gsc_uc_heci_cmd_submit_nonpriv(struct intel_gsc_uc *gsc,
struct intel_context *ce,
struct intel_gsc_heci_non_priv_pkt *pkt,
u32 *cs, int timeout_ms);
#endif


@ -743,6 +743,13 @@ struct i915_vma *intel_guc_allocate_vma(struct intel_guc *guc, u32 size)
if (IS_ERR(obj))
return ERR_CAST(obj);
/*
* Wa_22016122933: For MTL the shared memory needs to be mapped
* as WC on CPU side and UC (PAT index 2) on GPU side
*/
if (IS_METEORLAKE(gt->i915))
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
vma = i915_vma_instance(obj, &gt->ggtt->vm, NULL);
if (IS_ERR(vma))
goto err;


@ -42,6 +42,7 @@ struct intel_guc {
/** @capture: the error-state-capture module's data and objects */
struct intel_guc_state_capture *capture;
/** @dbgfs_node: debugfs node */
struct dentry *dbgfs_node;
/** @sched_engine: Global engine used to submit requests to GuC */


@ -643,6 +643,39 @@ static void guc_init_golden_context(struct intel_guc *guc)
GEM_BUG_ON(guc->ads_golden_ctxt_size != total_size);
}
static u32 guc_get_capture_engine_mask(struct iosys_map *info_map, u32 capture_class)
{
u32 mask;
switch (capture_class) {
case GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE:
mask = info_map_read(info_map, engine_enabled_masks[GUC_RENDER_CLASS]);
mask |= info_map_read(info_map, engine_enabled_masks[GUC_COMPUTE_CLASS]);
break;
case GUC_CAPTURE_LIST_CLASS_VIDEO:
mask = info_map_read(info_map, engine_enabled_masks[GUC_VIDEO_CLASS]);
break;
case GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE:
mask = info_map_read(info_map, engine_enabled_masks[GUC_VIDEOENHANCE_CLASS]);
break;
case GUC_CAPTURE_LIST_CLASS_BLITTER:
mask = info_map_read(info_map, engine_enabled_masks[GUC_BLITTER_CLASS]);
break;
case GUC_CAPTURE_LIST_CLASS_GSC_OTHER:
mask = info_map_read(info_map, engine_enabled_masks[GUC_GSC_OTHER_CLASS]);
break;
default:
mask = 0;
}
return mask;
}
static int
guc_capture_prep_lists(struct intel_guc *guc)
{
@ -678,9 +711,10 @@ guc_capture_prep_lists(struct intel_guc *guc)
for (i = 0; i < GUC_CAPTURE_LIST_INDEX_MAX; i++) {
for (j = 0; j < GUC_MAX_ENGINE_CLASSES; j++) {
u32 engine_mask = guc_get_capture_engine_mask(&info_map, j);
/* null list if we don't have said engine or list */
if (!info_map_read(&info_map, engine_enabled_masks[j])) {
if (!engine_mask) {
if (ads_is_mapped) {
ads_blob_write(guc, ads.capture_class[i][j], null_ggtt);
ads_blob_write(guc, ads.capture_instance[i][j], null_ggtt);


@ -30,12 +30,12 @@
#define COMMON_BASE_GLOBAL \
{ FORCEWAKE_MT, 0, 0, "FORCEWAKE" }
#define COMMON_GEN9BASE_GLOBAL \
#define COMMON_GEN8BASE_GLOBAL \
{ ERROR_GEN6, 0, 0, "ERROR_GEN6" }, \
{ DONE_REG, 0, 0, "DONE_REG" }, \
{ HSW_GTT_CACHE_EN, 0, 0, "HSW_GTT_CACHE_EN" }
#define GEN9_GLOBAL \
#define GEN8_GLOBAL \
{ GEN8_FAULT_TLB_DATA0, 0, 0, "GEN8_FAULT_TLB_DATA0" }, \
{ GEN8_FAULT_TLB_DATA1, 0, 0, "GEN8_FAULT_TLB_DATA1" }
@ -96,67 +96,65 @@
{ GEN12_SFC_DONE(2), 0, 0, "SFC_DONE[2]" }, \
{ GEN12_SFC_DONE(3), 0, 0, "SFC_DONE[3]" }
/* XE_LPD - Global */
static const struct __guc_mmio_reg_descr xe_lpd_global_regs[] = {
/* XE_LP Global */
static const struct __guc_mmio_reg_descr xe_lp_global_regs[] = {
COMMON_BASE_GLOBAL,
COMMON_GEN9BASE_GLOBAL,
COMMON_GEN8BASE_GLOBAL,
COMMON_GEN12BASE_GLOBAL,
};
/* XE_LPD - Render / Compute Per-Class */
static const struct __guc_mmio_reg_descr xe_lpd_rc_class_regs[] = {
/* XE_LP Render / Compute Per-Class */
static const struct __guc_mmio_reg_descr xe_lp_rc_class_regs[] = {
COMMON_BASE_HAS_EU,
COMMON_BASE_RENDER,
COMMON_GEN12BASE_RENDER,
};
/* GEN9/XE_LPD - Render / Compute Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lpd_rc_inst_regs[] = {
/* GEN8+ Render / Compute Per-Engine-Instance */
static const struct __guc_mmio_reg_descr gen8_rc_inst_regs[] = {
COMMON_BASE_ENGINE_INSTANCE,
};
/* GEN9/XE_LPD - Media Decode/Encode Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lpd_vd_inst_regs[] = {
/* GEN8+ Media Decode/Encode Per-Engine-Instance */
static const struct __guc_mmio_reg_descr gen8_vd_inst_regs[] = {
COMMON_BASE_ENGINE_INSTANCE,
};
/* XE_LPD - Video Enhancement Per-Class */
static const struct __guc_mmio_reg_descr xe_lpd_vec_class_regs[] = {
/* XE_LP Video Enhancement Per-Class */
static const struct __guc_mmio_reg_descr xe_lp_vec_class_regs[] = {
COMMON_GEN12BASE_VEC,
};
/* GEN9/XE_LPD - Video Enhancement Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lpd_vec_inst_regs[] = {
/* GEN8+ Video Enhancement Per-Engine-Instance */
static const struct __guc_mmio_reg_descr gen8_vec_inst_regs[] = {
COMMON_BASE_ENGINE_INSTANCE,
};
/* GEN9/XE_LPD - Blitter Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lpd_blt_inst_regs[] = {
/* GEN8+ Blitter Per-Engine-Instance */
static const struct __guc_mmio_reg_descr gen8_blt_inst_regs[] = {
COMMON_BASE_ENGINE_INSTANCE,
};
/* XE_LPD - GSC Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lpd_gsc_inst_regs[] = {
/* XE_LP - GSC Per-Engine-Instance */
static const struct __guc_mmio_reg_descr xe_lp_gsc_inst_regs[] = {
COMMON_BASE_ENGINE_INSTANCE,
};
/* GEN9 - Global */
static const struct __guc_mmio_reg_descr default_global_regs[] = {
/* GEN8 - Global */
static const struct __guc_mmio_reg_descr gen8_global_regs[] = {
COMMON_BASE_GLOBAL,
COMMON_GEN9BASE_GLOBAL,
GEN9_GLOBAL,
COMMON_GEN8BASE_GLOBAL,
GEN8_GLOBAL,
};
static const struct __guc_mmio_reg_descr default_rc_class_regs[] = {
static const struct __guc_mmio_reg_descr gen8_rc_class_regs[] = {
COMMON_BASE_HAS_EU,
COMMON_BASE_RENDER,
};
/*
* Empty lists:
* GEN9/XE_LPD - Blitter Per-Class
* GEN9/XE_LPD - Media Decode/Encode Per-Class
* GEN9 - VEC Class
* Empty list to prevent warnings about unknown class/instance types
* as not all class/instance types have entries on all platforms.
*/
static const struct __guc_mmio_reg_descr empty_regs_list[] = {
};
@ -174,37 +172,33 @@ static const struct __guc_mmio_reg_descr empty_regs_list[] = {
}
/* List of lists */
static const struct __guc_mmio_reg_descr_group default_lists[] = {
MAKE_REGLIST(default_global_regs, PF, GLOBAL, 0),
MAKE_REGLIST(default_rc_class_regs, PF, ENGINE_CLASS, GUC_RENDER_CLASS),
MAKE_REGLIST(xe_lpd_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_RENDER_CLASS),
MAKE_REGLIST(default_rc_class_regs, PF, ENGINE_CLASS, GUC_COMPUTE_CLASS),
MAKE_REGLIST(xe_lpd_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_COMPUTE_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_VIDEO_CLASS),
MAKE_REGLIST(xe_lpd_vd_inst_regs, PF, ENGINE_INSTANCE, GUC_VIDEO_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_VIDEOENHANCE_CLASS),
MAKE_REGLIST(xe_lpd_vec_inst_regs, PF, ENGINE_INSTANCE, GUC_VIDEOENHANCE_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_BLITTER_CLASS),
MAKE_REGLIST(xe_lpd_blt_inst_regs, PF, ENGINE_INSTANCE, GUC_BLITTER_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_GSC_OTHER_CLASS),
MAKE_REGLIST(xe_lpd_gsc_inst_regs, PF, ENGINE_INSTANCE, GUC_GSC_OTHER_CLASS),
static const struct __guc_mmio_reg_descr_group gen8_lists[] = {
MAKE_REGLIST(gen8_global_regs, PF, GLOBAL, 0),
MAKE_REGLIST(gen8_rc_class_regs, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE),
MAKE_REGLIST(gen8_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_VIDEO),
MAKE_REGLIST(gen8_vd_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_VIDEO),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE),
MAKE_REGLIST(gen8_vec_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_BLITTER),
MAKE_REGLIST(gen8_blt_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_BLITTER),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_GSC_OTHER),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_GSC_OTHER),
{}
};
static const struct __guc_mmio_reg_descr_group xe_lpd_lists[] = {
MAKE_REGLIST(xe_lpd_global_regs, PF, GLOBAL, 0),
MAKE_REGLIST(xe_lpd_rc_class_regs, PF, ENGINE_CLASS, GUC_RENDER_CLASS),
MAKE_REGLIST(xe_lpd_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_RENDER_CLASS),
MAKE_REGLIST(xe_lpd_rc_class_regs, PF, ENGINE_CLASS, GUC_COMPUTE_CLASS),
MAKE_REGLIST(xe_lpd_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_COMPUTE_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_VIDEO_CLASS),
MAKE_REGLIST(xe_lpd_vd_inst_regs, PF, ENGINE_INSTANCE, GUC_VIDEO_CLASS),
MAKE_REGLIST(xe_lpd_vec_class_regs, PF, ENGINE_CLASS, GUC_VIDEOENHANCE_CLASS),
MAKE_REGLIST(xe_lpd_vec_inst_regs, PF, ENGINE_INSTANCE, GUC_VIDEOENHANCE_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_BLITTER_CLASS),
MAKE_REGLIST(xe_lpd_blt_inst_regs, PF, ENGINE_INSTANCE, GUC_BLITTER_CLASS),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_GSC_OTHER_CLASS),
MAKE_REGLIST(xe_lpd_gsc_inst_regs, PF, ENGINE_INSTANCE, GUC_GSC_OTHER_CLASS),
static const struct __guc_mmio_reg_descr_group xe_lp_lists[] = {
MAKE_REGLIST(xe_lp_global_regs, PF, GLOBAL, 0),
MAKE_REGLIST(xe_lp_rc_class_regs, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE),
MAKE_REGLIST(gen8_rc_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_VIDEO),
MAKE_REGLIST(gen8_vd_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_VIDEO),
MAKE_REGLIST(xe_lp_vec_class_regs, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE),
MAKE_REGLIST(gen8_vec_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_BLITTER),
MAKE_REGLIST(gen8_blt_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_BLITTER),
MAKE_REGLIST(empty_regs_list, PF, ENGINE_CLASS, GUC_CAPTURE_LIST_CLASS_GSC_OTHER),
MAKE_REGLIST(xe_lp_gsc_inst_regs, PF, ENGINE_INSTANCE, GUC_CAPTURE_LIST_CLASS_GSC_OTHER),
{}
};
@ -260,11 +254,15 @@ struct __ext_steer_reg {
i915_mcr_reg_t reg;
};
static const struct __ext_steer_reg xe_extregs[] = {
static const struct __ext_steer_reg gen8_extregs[] = {
{"GEN8_SAMPLER_INSTDONE", GEN8_SAMPLER_INSTDONE},
{"GEN8_ROW_INSTDONE", GEN8_ROW_INSTDONE}
};
static const struct __ext_steer_reg xehpg_extregs[] = {
{"XEHPG_INSTDONE_GEOM_SVG", XEHPG_INSTDONE_GEOM_SVG}
};
static void __fill_ext_reg(struct __guc_mmio_reg_descr *ext,
const struct __ext_steer_reg *extlist,
int slice_id, int subslice_id)
@ -295,8 +293,8 @@ __alloc_ext_regs(struct __guc_mmio_reg_descr_group *newlist,
}
static void
guc_capture_alloc_steered_lists_xe_lpd(struct intel_guc *guc,
const struct __guc_mmio_reg_descr_group *lists)
guc_capture_alloc_steered_lists(struct intel_guc *guc,
const struct __guc_mmio_reg_descr_group *lists)
{
struct intel_gt *gt = guc_to_gt(guc);
int slice, subslice, iter, i, num_steer_regs, num_tot_regs = 0;
@ -304,74 +302,20 @@ guc_capture_alloc_steered_lists_xe_lpd(struct intel_guc *guc,
struct __guc_mmio_reg_descr_group *extlists;
struct __guc_mmio_reg_descr *extarray;
struct sseu_dev_info *sseu;
bool has_xehpg_extregs;
/* In XE_LPD we only have steered registers for the render-class */
/* steered registers currently only exist for the render-class */
list = guc_capture_get_one_list(lists, GUC_CAPTURE_LIST_INDEX_PF,
GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS, GUC_RENDER_CLASS);
GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS,
GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE);
/* skip if extlists was previously allocated */
if (!list || guc->capture->extlists)
return;
num_steer_regs = ARRAY_SIZE(xe_extregs);
has_xehpg_extregs = GRAPHICS_VER_FULL(gt->i915) >= IP_VER(12, 55);
sseu = &gt->info.sseu;
for_each_ss_steering(iter, gt, slice, subslice)
num_tot_regs += num_steer_regs;
if (!num_tot_regs)
return;
/* allocate an extra for an end marker */
extlists = kcalloc(2, sizeof(struct __guc_mmio_reg_descr_group), GFP_KERNEL);
if (!extlists)
return;
if (__alloc_ext_regs(&extlists[0], list, num_tot_regs)) {
kfree(extlists);
return;
}
extarray = extlists[0].extlist;
for_each_ss_steering(iter, gt, slice, subslice) {
for (i = 0; i < num_steer_regs; ++i) {
__fill_ext_reg(extarray, &xe_extregs[i], slice, subslice);
++extarray;
}
}
guc->capture->extlists = extlists;
}
static const struct __ext_steer_reg xehpg_extregs[] = {
{"XEHPG_INSTDONE_GEOM_SVG", XEHPG_INSTDONE_GEOM_SVG}
};
static bool __has_xehpg_extregs(u32 ipver)
{
return (ipver >= IP_VER(12, 55));
}
static void
guc_capture_alloc_steered_lists_xe_hpg(struct intel_guc *guc,
const struct __guc_mmio_reg_descr_group *lists,
u32 ipver)
{
struct intel_gt *gt = guc_to_gt(guc);
struct sseu_dev_info *sseu;
int slice, subslice, i, iter, num_steer_regs, num_tot_regs = 0;
const struct __guc_mmio_reg_descr_group *list;
struct __guc_mmio_reg_descr_group *extlists;
struct __guc_mmio_reg_descr *extarray;
/* In XE_LP / HPG we only have render-class steering registers during error-capture */
list = guc_capture_get_one_list(lists, GUC_CAPTURE_LIST_INDEX_PF,
GUC_CAPTURE_LIST_TYPE_ENGINE_CLASS, GUC_RENDER_CLASS);
/* skip if extlists was previously allocated */
if (!list || guc->capture->extlists)
return;
num_steer_regs = ARRAY_SIZE(xe_extregs);
if (__has_xehpg_extregs(ipver))
num_steer_regs = ARRAY_SIZE(gen8_extregs);
if (has_xehpg_extregs)
num_steer_regs += ARRAY_SIZE(xehpg_extregs);
sseu = &gt->info.sseu;
@ -393,11 +337,12 @@ guc_capture_alloc_steered_lists_xe_hpg(struct intel_guc *guc,
extarray = extlists[0].extlist;
for_each_ss_steering(iter, gt, slice, subslice) {
for (i = 0; i < ARRAY_SIZE(xe_extregs); ++i) {
__fill_ext_reg(extarray, &xe_extregs[i], slice, subslice);
for (i = 0; i < ARRAY_SIZE(gen8_extregs); ++i) {
__fill_ext_reg(extarray, &gen8_extregs[i], slice, subslice);
++extarray;
}
if (__has_xehpg_extregs(ipver)) {
if (has_xehpg_extregs) {
for (i = 0; i < ARRAY_SIZE(xehpg_extregs); ++i) {
__fill_ext_reg(extarray, &xehpg_extregs[i], slice, subslice);
++extarray;
@ -413,26 +358,22 @@ static const struct __guc_mmio_reg_descr_group *
guc_capture_get_device_reglist(struct intel_guc *guc)
{
struct drm_i915_private *i915 = guc_to_gt(guc)->i915;
const struct __guc_mmio_reg_descr_group *lists;
if (GRAPHICS_VER(i915) > 11) {
/*
* For certain engine classes, there are slice and subslice
* level registers requiring steering. We allocate and populate
* these at init time based on hw config and add it as an extension
* list at the end of the pre-populated render list.
*/
if (IS_DG2(i915))
guc_capture_alloc_steered_lists_xe_hpg(guc, xe_lpd_lists, IP_VER(12, 55));
else if (IS_XEHPSDV(i915))
guc_capture_alloc_steered_lists_xe_hpg(guc, xe_lpd_lists, IP_VER(12, 50));
else
guc_capture_alloc_steered_lists_xe_lpd(guc, xe_lpd_lists);
if (GRAPHICS_VER(i915) >= 12)
lists = xe_lp_lists;
else
lists = gen8_lists;
return xe_lpd_lists;
}
/*
* For certain engine classes, there are slice and subslice
* level registers requiring steering. We allocate and populate
* these at init time based on hw config and add it as an extension
* list at the end of the pre-populated render list.
*/
guc_capture_alloc_steered_lists(guc, lists);
/* if GuC submission is enabled on a non-POR platform, just use a common baseline */
return default_lists;
return lists;
}
static const char *
@ -456,17 +397,15 @@ static const char *
__stringify_engclass(u32 class)
{
switch (class) {
case GUC_RENDER_CLASS:
return "Render";
case GUC_VIDEO_CLASS:
case GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE:
return "Render/Compute";
case GUC_CAPTURE_LIST_CLASS_VIDEO:
return "Video";
case GUC_VIDEOENHANCE_CLASS:
case GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE:
return "VideoEnhance";
case GUC_BLITTER_CLASS:
case GUC_CAPTURE_LIST_CLASS_BLITTER:
return "Blitter";
case GUC_COMPUTE_CLASS:
return "Compute";
case GUC_GSC_OTHER_CLASS:
case GUC_CAPTURE_LIST_CLASS_GSC_OTHER:
return "GSC-Other";
default:
break;
@ -1596,6 +1535,36 @@ void intel_guc_capture_free_node(struct intel_engine_coredump *ee)
ee->guc_capture_node = NULL;
}
bool intel_guc_capture_is_matching_engine(struct intel_gt *gt,
struct intel_context *ce,
struct intel_engine_cs *engine)
{
struct __guc_capture_parsed_output *n;
struct intel_guc *guc;
if (!gt || !ce || !engine)
return false;
guc = &gt->uc.guc;
if (!guc->capture)
return false;
/*
* Look for a matching GuC reported error capture node from
* the internal output linked list based on lrca, guc-id and engine
* identification.
*/
list_for_each_entry(n, &guc->capture->outlist, link) {
if (n->eng_inst == GUC_ID_TO_ENGINE_INSTANCE(engine->guc_id) &&
n->eng_class == GUC_ID_TO_ENGINE_CLASS(engine->guc_id) &&
n->guc_id == ce->guc_id.id &&
(n->lrca & CTX_GTT_ADDRESS_MASK) == (ce->lrc.lrca & CTX_GTT_ADDRESS_MASK))
return true;
}
return false;
}
void intel_guc_capture_get_matching_node(struct intel_gt *gt,
struct intel_engine_coredump *ee,
struct intel_context *ce)
@ -1611,6 +1580,7 @@ void intel_guc_capture_get_matching_node(struct intel_gt *gt,
return;
GEM_BUG_ON(ee->guc_capture_node);
/*
* Look for a matching GuC reported error capture node from
* the internal output linked list based on lrca, guc-id and engine


@ -11,6 +11,7 @@
struct drm_i915_error_state_buf;
struct guc_gt_system_info;
struct intel_engine_coredump;
struct intel_engine_cs;
struct intel_context;
struct intel_gt;
struct intel_guc;
@ -20,6 +21,8 @@ int intel_guc_capture_print_engine_node(struct drm_i915_error_state_buf *m,
const struct intel_engine_coredump *ee);
void intel_guc_capture_get_matching_node(struct intel_gt *gt, struct intel_engine_coredump *ee,
struct intel_context *ce);
bool intel_guc_capture_is_matching_engine(struct intel_gt *gt, struct intel_context *ce,
struct intel_engine_cs *engine);
void intel_guc_capture_process(struct intel_guc *guc);
int intel_guc_capture_getlist(struct intel_guc *guc, u32 owner, u32 type, u32 classid,
void **outptr);


@ -13,6 +13,30 @@
#include "intel_guc_ct.h"
#include "intel_guc_print.h"
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GUC)
enum {
CT_DEAD_ALIVE = 0,
CT_DEAD_SETUP,
CT_DEAD_WRITE,
CT_DEAD_DEADLOCK,
CT_DEAD_H2G_HAS_ROOM,
CT_DEAD_READ,
CT_DEAD_PROCESS_FAILED,
};
static void ct_dead_ct_worker_func(struct work_struct *w);
#define CT_DEAD(ct, reason) \
do { \
if (!(ct)->dead_ct_reported) { \
(ct)->dead_ct_reason |= 1 << CT_DEAD_##reason; \
queue_work(system_unbound_wq, &(ct)->dead_ct_worker); \
} \
} while (0)
#else
#define CT_DEAD(ct, reason) do { } while (0)
#endif
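The reason value later printed as "CTB is dead - reason=0x%X" is a bitmask built from the enum positions above (bit 1 = SETUP, bit 2 = WRITE, and so on). A tiny standalone decoder, purely illustrative: only the ordering is taken from the enum, the string table and helper names are made up:

#include <stdint.h>
#include <stdio.h>

static const char * const ct_dead_reasons[] = {
	"alive", "setup", "write", "deadlock",
	"h2g_has_room", "read", "process_failed",
};

/* illustrative only: print every reason bit set in the logged value */
static void decode_ct_dead_reason(uint32_t reason)
{
	unsigned int i;

	for (i = 0; i < sizeof(ct_dead_reasons) / sizeof(ct_dead_reasons[0]); i++)
		if (reason & (1u << i))
			printf("CT dead reason: %s\n", ct_dead_reasons[i]);
}

int main(void)
{
	decode_ct_dead_reason(0x8);	/* bit 3 -> deadlock */
	return 0;
}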
static inline struct intel_guc *ct_to_guc(struct intel_guc_ct *ct)
{
return container_of(ct, struct intel_guc, ct);
@ -93,6 +117,9 @@ void intel_guc_ct_init_early(struct intel_guc_ct *ct)
spin_lock_init(&ct->requests.lock);
INIT_LIST_HEAD(&ct->requests.pending);
INIT_LIST_HEAD(&ct->requests.incoming);
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GUC)
INIT_WORK(&ct->dead_ct_worker, ct_dead_ct_worker_func);
#endif
INIT_WORK(&ct->requests.worker, ct_incoming_request_worker_func);
tasklet_setup(&ct->receive_tasklet, ct_receive_tasklet_func);
init_waitqueue_head(&ct->wq);
@ -319,11 +346,16 @@ int intel_guc_ct_enable(struct intel_guc_ct *ct)
ct->enabled = true;
ct->stall_time = KTIME_MAX;
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GUC)
ct->dead_ct_reported = false;
ct->dead_ct_reason = CT_DEAD_ALIVE;
#endif
return 0;
err_out:
CT_PROBE_ERROR(ct, "Failed to enable CTB (%pe)\n", ERR_PTR(err));
CT_DEAD(ct, SETUP);
return err;
}
@ -434,6 +466,7 @@ static int ct_write(struct intel_guc_ct *ct,
corrupted:
CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n",
desc->head, desc->tail, desc->status);
CT_DEAD(ct, WRITE);
ctb->broken = true;
return -EPIPE;
}
@ -504,6 +537,7 @@ static inline bool ct_deadlocked(struct intel_guc_ct *ct)
CT_ERROR(ct, "Head: %u\n (Dwords)", ct->ctbs.recv.desc->head);
CT_ERROR(ct, "Tail: %u\n (Dwords)", ct->ctbs.recv.desc->tail);
CT_DEAD(ct, DEADLOCK);
ct->ctbs.send.broken = true;
}
@ -552,6 +586,7 @@ static inline bool h2g_has_room(struct intel_guc_ct *ct, u32 len_dw)
head, ctb->size);
desc->status |= GUC_CTB_STATUS_OVERFLOW;
ctb->broken = true;
CT_DEAD(ct, H2G_HAS_ROOM);
return false;
}
@ -902,12 +937,19 @@ static int ct_read(struct intel_guc_ct *ct, struct ct_incoming_msg **msg)
/* now update descriptor */
WRITE_ONCE(desc->head, head);
/*
* Wa_22016122933: Making sure the head update is
* visible to GuC right away
*/
intel_guc_write_barrier(ct_to_guc(ct));
return available - len;
corrupted:
CT_ERROR(ct, "Corrupted descriptor head=%u tail=%u status=%#x\n",
desc->head, desc->tail, desc->status);
ctb->broken = true;
CT_DEAD(ct, READ);
return -EPIPE;
}
@ -1057,6 +1099,7 @@ static bool ct_process_incoming_requests(struct intel_guc_ct *ct)
if (unlikely(err)) {
CT_ERROR(ct, "Failed to process CT message (%pe) %*ph\n",
ERR_PTR(err), 4 * request->size, request->msg);
CT_DEAD(ct, PROCESS_FAILED);
ct_free_msg(request);
}
@ -1233,3 +1276,19 @@ void intel_guc_ct_print_info(struct intel_guc_ct *ct,
drm_printf(p, "Tail: %u\n",
ct->ctbs.recv.desc->tail);
}
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GUC)
static void ct_dead_ct_worker_func(struct work_struct *w)
{
struct intel_guc_ct *ct = container_of(w, struct intel_guc_ct, dead_ct_worker);
struct intel_guc *guc = ct_to_guc(ct);
if (ct->dead_ct_reported)
return;
ct->dead_ct_reported = true;
guc_info(guc, "CTB is dead - reason=0x%X\n", ct->dead_ct_reason);
intel_klog_error_capture(guc_to_gt(guc), (intel_engine_mask_t)~0U);
}
#endif


@ -85,6 +85,12 @@ struct intel_guc_ct {
/** @stall_time: time of first time a CTB submission is stalled */
ktime_t stall_time;
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GUC)
int dead_ct_reason;
bool dead_ct_reported;
struct work_struct dead_ct_worker;
#endif
};
void intel_guc_ct_init_early(struct intel_guc_ct *ct);


@ -129,6 +129,7 @@ static inline bool guc_load_done(struct intel_uncore *uncore, u32 *status, bool
case INTEL_BOOTROM_STATUS_RC6CTXCONFIG_FAILED:
case INTEL_BOOTROM_STATUS_MPUMAP_INCORRECT:
case INTEL_BOOTROM_STATUS_EXCEPTION:
case INTEL_BOOTROM_STATUS_PROD_KEY_CHECK_FAILURE:
*success = false;
return true;
}
@ -190,8 +191,10 @@ static int guc_wait_ucode(struct intel_guc *guc)
if (!ret || !success)
break;
guc_dbg(guc, "load still in progress, count = %d, freq = %dMHz\n",
count, intel_rps_read_actual_frequency(&uncore->gt->rps));
guc_dbg(guc, "load still in progress, count = %d, freq = %dMHz, status = 0x%08X [0x%02X/%02X]\n",
count, intel_rps_read_actual_frequency(&uncore->gt->rps), status,
REG_FIELD_GET(GS_BOOTROM_MASK, status),
REG_FIELD_GET(GS_UKERNEL_MASK, status));
}
after = ktime_get();
delta = ktime_sub(after, before);
@ -219,6 +222,11 @@ static int guc_wait_ucode(struct intel_guc *guc)
guc_info(guc, "firmware signature verification failed\n");
ret = -ENOEXEC;
break;
case INTEL_BOOTROM_STATUS_PROD_KEY_CHECK_FAILURE:
guc_info(guc, "firmware production part check failure\n");
ret = -ENOEXEC;
break;
}
switch (ukernel) {


@ -411,6 +411,15 @@ enum guc_capture_type {
GUC_CAPTURE_LIST_TYPE_MAX,
};
/* Class indices for the capture_class and capture_instance arrays */
enum {
GUC_CAPTURE_LIST_CLASS_RENDER_COMPUTE = 0,
GUC_CAPTURE_LIST_CLASS_VIDEO = 1,
GUC_CAPTURE_LIST_CLASS_VIDEOENHANCE = 2,
GUC_CAPTURE_LIST_CLASS_BLITTER = 3,
GUC_CAPTURE_LIST_CLASS_GSC_OTHER = 4,
};
/* GuC Additional Data Struct */
struct guc_ads {
struct guc_mmio_reg_set reg_state_list[GUC_MAX_ENGINE_CLASSES][GUC_MAX_INSTANCES_PER_CLASS];
@ -451,7 +460,7 @@ enum guc_log_buffer_type {
GUC_MAX_LOG_BUFFER
};
/**
/*
* struct guc_log_buffer_state - GuC log buffer state
*
* Below state structure is used for coordination of retrieval of GuC firmware


@ -277,6 +277,7 @@ int intel_guc_slpc_init(struct intel_guc_slpc *slpc)
slpc->max_freq_softlimit = 0;
slpc->min_freq_softlimit = 0;
slpc->ignore_eff_freq = false;
slpc->min_is_rpmax = false;
slpc->boost_freq = 0;
@ -457,6 +458,29 @@ int intel_guc_slpc_get_max_freq(struct intel_guc_slpc *slpc, u32 *val)
return ret;
}
int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val)
{
struct drm_i915_private *i915 = slpc_to_i915(slpc);
intel_wakeref_t wakeref;
int ret;
mutex_lock(&slpc->lock);
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
ret = slpc_set_param(slpc,
SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
val);
if (ret)
guc_probe_error(slpc_to_guc(slpc), "Failed to set efficient freq(%d): %pe\n",
val, ERR_PTR(ret));
else
slpc->ignore_eff_freq = val;
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
mutex_unlock(&slpc->lock);
return ret;
}
/**
* intel_guc_slpc_set_min_freq() - Set min frequency limit for SLPC.
* @slpc: pointer to intel_guc_slpc.
@ -482,16 +506,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
mutex_lock(&slpc->lock);
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
/* Ignore efficient freq if lower min freq is requested */
ret = slpc_set_param(slpc,
SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
val < slpc->rp1_freq);
if (ret) {
guc_probe_error(slpc_to_guc(slpc), "Failed to toggle efficient freq: %pe\n",
ERR_PTR(ret));
goto out;
}
ret = slpc_set_param(slpc,
SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
val);
@ -499,7 +513,6 @@ int intel_guc_slpc_set_min_freq(struct intel_guc_slpc *slpc, u32 val)
if (!ret)
slpc->min_freq_softlimit = val;
out:
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
mutex_unlock(&slpc->lock);
@ -752,6 +765,9 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
/* Set cached media freq ratio mode */
intel_guc_slpc_set_media_ratio_mode(slpc, slpc->media_ratio_mode);
/* Set cached value of ignore efficient freq */
intel_guc_slpc_set_ignore_eff_freq(slpc, slpc->ignore_eff_freq);
return 0;
}
@ -821,6 +837,8 @@ int intel_guc_slpc_print_info(struct intel_guc_slpc *slpc, struct drm_printer *p
slpc_decode_min_freq(slpc));
drm_printf(p, "\twaitboosts: %u\n",
slpc->num_boosts);
drm_printf(p, "\tBoosts outstanding: %u\n",
atomic_read(&slpc->num_waiters));
}
}


@ -46,5 +46,6 @@ void intel_guc_slpc_boost(struct intel_guc_slpc *slpc);
void intel_guc_slpc_dec_waiters(struct intel_guc_slpc *slpc);
int intel_guc_slpc_unset_gucrc_mode(struct intel_guc_slpc *slpc);
int intel_guc_slpc_override_gucrc_mode(struct intel_guc_slpc *slpc, u32 mode);
int intel_guc_slpc_set_ignore_eff_freq(struct intel_guc_slpc *slpc, bool val);
#endif


@ -31,6 +31,7 @@ struct intel_guc_slpc {
/* frequency softlimits */
u32 min_freq_softlimit;
u32 max_freq_softlimit;
bool ignore_eff_freq;
/* cached media ratio mode */
u32 media_ratio_mode;


@ -1402,13 +1402,34 @@ static void __update_guc_busyness_stats(struct intel_guc *guc)
spin_unlock_irqrestore(&guc->timestamp.lock, flags);
}
static void __guc_context_update_stats(struct intel_context *ce)
{
struct intel_guc *guc = ce_to_guc(ce);
unsigned long flags;
spin_lock_irqsave(&guc->timestamp.lock, flags);
lrc_update_runtime(ce);
spin_unlock_irqrestore(&guc->timestamp.lock, flags);
}
static void guc_context_update_stats(struct intel_context *ce)
{
if (!intel_context_pin_if_active(ce))
return;
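/*
* unpinned contexts are skipped here: their final runtime is flushed at
* unpin time via __guc_context_update_stats() in guc_context_unpin()
*/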
__guc_context_update_stats(ce);
intel_context_unpin(ce);
}
static void guc_timestamp_ping(struct work_struct *wrk)
{
struct intel_guc *guc = container_of(wrk, typeof(*guc),
timestamp.work.work);
struct intel_uc *uc = container_of(guc, typeof(*uc), guc);
struct intel_gt *gt = guc_to_gt(guc);
struct intel_context *ce;
intel_wakeref_t wakeref;
unsigned long index;
int srcu, ret;
/*
@ -1424,6 +1445,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
__update_guc_busyness_stats(guc);
/* adjust context stats for overflow */
xa_for_each(&guc->context_lookup, index, ce)
guc_context_update_stats(ce);
intel_gt_reset_unlock(gt, srcu);
guc_enable_busyness_worker(guc);
@ -1629,16 +1654,16 @@ static void guc_reset_state(struct intel_context *ce, u32 head, bool scrub)
static void guc_engine_reset_prepare(struct intel_engine_cs *engine)
{
if (!IS_GRAPHICS_VER(engine->i915, 11, 12))
return;
intel_engine_stop_cs(engine);
/*
* Wa_22011802037: In addition to stopping the cs, we need
* to wait for any pending mi force wakeups
*/
intel_engine_wait_for_pending_mi_fw(engine);
if (IS_MTL_GRAPHICS_STEP(engine->i915, M, STEP_A0, STEP_B0) ||
(GRAPHICS_VER(engine->i915) >= 11 &&
GRAPHICS_VER_FULL(engine->i915) < IP_VER(12, 70))) {
intel_engine_stop_cs(engine);
intel_engine_wait_for_pending_mi_fw(engine);
}
}
static void guc_reset_nop(struct intel_engine_cs *engine)
@ -2774,6 +2799,7 @@ static void guc_context_unpin(struct intel_context *ce)
{
struct intel_guc *guc = ce_to_guc(ce);
__guc_context_update_stats(ce);
unpin_guc_id(guc, ce);
lrc_unpin(ce);
@ -3455,6 +3481,7 @@ static void remove_from_context(struct i915_request *rq)
}
static const struct intel_context_ops guc_context_ops = {
.flags = COPS_RUNTIME_CYCLES,
.alloc = guc_context_alloc,
.close = guc_context_close,
@ -3473,6 +3500,8 @@ static const struct intel_context_ops guc_context_ops = {
.sched_disable = guc_context_sched_disable,
.update_stats = guc_context_update_stats,
.reset = lrc_reset,
.destroy = guc_context_destroy,
@ -3728,6 +3757,7 @@ static int guc_virtual_context_alloc(struct intel_context *ce)
}
static const struct intel_context_ops virtual_guc_context_ops = {
.flags = COPS_RUNTIME_CYCLES,
.alloc = guc_virtual_context_alloc,
.close = guc_context_close,
@ -3745,6 +3775,7 @@ static const struct intel_context_ops virtual_guc_context_ops = {
.exit = guc_virtual_context_exit,
.sched_disable = guc_context_sched_disable,
.update_stats = guc_context_update_stats,
.destroy = guc_context_destroy,
@ -4697,13 +4728,37 @@ static void capture_error_state(struct intel_guc *guc,
{
struct intel_gt *gt = guc_to_gt(guc);
struct drm_i915_private *i915 = gt->i915;
struct intel_engine_cs *engine = __context_to_physical_engine(ce);
intel_wakeref_t wakeref;
intel_engine_mask_t engine_mask;
if (intel_engine_is_virtual(ce->engine)) {
struct intel_engine_cs *e;
intel_engine_mask_t tmp, virtual_mask = ce->engine->mask;
engine_mask = 0;
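/*
* a virtual context may have run on any of its physical engines: ask the
* GuC error-capture code which of them actually reported this context as
* hung and only flag/capture those
*/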
for_each_engine_masked(e, ce->engine->gt, virtual_mask, tmp) {
bool match = intel_guc_capture_is_matching_engine(gt, ce, e);
if (match) {
intel_engine_set_hung_context(e, ce);
engine_mask |= e->mask;
atomic_inc(&i915->gpu_error.reset_engine_count[e->uabi_class]);
}
}
if (!engine_mask) {
guc_warn(guc, "No matching physical engine capture for virtual engine context 0x%04X / %s",
ce->guc_id.id, ce->engine->name);
engine_mask = ~0U;
}
} else {
intel_engine_set_hung_context(ce->engine, ce);
engine_mask = ce->engine->mask;
atomic_inc(&i915->gpu_error.reset_engine_count[ce->engine->uabi_class]);
}
intel_engine_set_hung_context(engine, ce);
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
i915_capture_error_state(gt, engine->mask, CORE_DUMP_FLAG_IS_GUC_CAPTURE);
atomic_inc(&i915->gpu_error.reset_engine_count[engine->uabi_class]);
i915_capture_error_state(gt, engine_mask, CORE_DUMP_FLAG_IS_GUC_CAPTURE);
}
static void guc_context_replay(struct intel_context *ce)


@ -18,6 +18,7 @@
#include "intel_uc.h"
#include "i915_drv.h"
#include "i915_hwmon.h"
static const struct intel_uc_ops uc_ops_off;
static const struct intel_uc_ops uc_ops_on;
@ -431,6 +432,9 @@ static bool uc_is_wopcm_locked(struct intel_uc *uc)
static int __uc_check_hw(struct intel_uc *uc)
{
if (uc->fw_table_invalid)
return -EIO;
if (!intel_uc_supports_guc(uc))
return 0;
@ -461,6 +465,7 @@ static int __uc_init_hw(struct intel_uc *uc)
struct intel_guc *guc = &uc->guc;
struct intel_huc *huc = &uc->huc;
int ret, attempts;
bool pl1en = false;
GEM_BUG_ON(!intel_uc_supports_guc(uc));
GEM_BUG_ON(!intel_uc_wants_guc(uc));
@ -491,6 +496,9 @@ static int __uc_init_hw(struct intel_uc *uc)
else
attempts = 1;
/* Disable a potentially low PL1 power limit to allow freq to be raised */
i915_hwmon_power_max_disable(gt->i915, &pl1en);
intel_rps_raise_unslice(&uc_to_gt(uc)->rps);
while (attempts--) {
@ -500,7 +508,7 @@ static int __uc_init_hw(struct intel_uc *uc)
*/
ret = __uc_sanitize(uc);
if (ret)
goto err_out;
goto err_rps;
intel_huc_fw_upload(huc);
intel_guc_ads_reset(guc);
@ -547,6 +555,8 @@ static int __uc_init_hw(struct intel_uc *uc)
intel_rps_lower_unslice(&uc_to_gt(uc)->rps);
}
i915_hwmon_power_max_restore(gt->i915, pl1en);
guc_info(guc, "submission %s\n", str_enabled_disabled(intel_uc_uses_guc_submission(uc)));
guc_info(guc, "SLPC %s\n", str_enabled_disabled(intel_uc_uses_guc_slpc(uc)));
@ -559,10 +569,12 @@ err_submission:
intel_guc_submission_disable(guc);
err_log_capture:
__uc_capture_load_err_log(uc);
err_out:
err_rps:
/* Return GT back to RPn */
intel_rps_lower_unslice(&uc_to_gt(uc)->rps);
i915_hwmon_power_max_restore(gt->i915, pl1en);
err_out:
__uc_sanitize(uc);
if (!ret) {


@ -36,6 +36,7 @@ struct intel_uc {
struct drm_i915_gem_object *load_err_log;
bool reset_in_progress;
bool fw_table_invalid;
};
void intel_uc_init_early(struct intel_uc *uc);


@ -17,6 +17,12 @@
#include "i915_drv.h"
#include "i915_reg.h"
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
#define UNEXPECTED gt_probe_error
#else
#define UNEXPECTED gt_notice
#endif
static inline struct intel_gt *
____uc_fw_to_gt(struct intel_uc_fw *uc_fw, enum intel_uc_fw_type type)
{
@ -79,14 +85,15 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
* security fixes, etc. to be enabled.
*/
#define INTEL_GUC_FIRMWARE_DEFS(fw_def, guc_maj, guc_mmp) \
fw_def(DG2, 0, guc_maj(dg2, 70, 5)) \
fw_def(ALDERLAKE_P, 0, guc_maj(adlp, 70, 5)) \
fw_def(METEORLAKE, 0, guc_maj(mtl, 70, 6, 6)) \
fw_def(DG2, 0, guc_maj(dg2, 70, 5, 1)) \
fw_def(ALDERLAKE_P, 0, guc_maj(adlp, 70, 5, 1)) \
fw_def(ALDERLAKE_P, 0, guc_mmp(adlp, 70, 1, 1)) \
fw_def(ALDERLAKE_P, 0, guc_mmp(adlp, 69, 0, 3)) \
fw_def(ALDERLAKE_S, 0, guc_maj(tgl, 70, 5)) \
fw_def(ALDERLAKE_S, 0, guc_maj(tgl, 70, 5, 1)) \
fw_def(ALDERLAKE_S, 0, guc_mmp(tgl, 70, 1, 1)) \
fw_def(ALDERLAKE_S, 0, guc_mmp(tgl, 69, 0, 3)) \
fw_def(DG1, 0, guc_maj(dg1, 70, 5)) \
fw_def(DG1, 0, guc_maj(dg1, 70, 5, 1)) \
fw_def(ROCKETLAKE, 0, guc_mmp(tgl, 70, 1, 1)) \
fw_def(TIGERLAKE, 0, guc_mmp(tgl, 70, 1, 1)) \
fw_def(JASPERLAKE, 0, guc_mmp(ehl, 70, 1, 1)) \
@ -140,7 +147,7 @@ void intel_uc_fw_change_status(struct intel_uc_fw *uc_fw,
__stringify(patch_) ".bin"
/* Minor for internal driver use, not part of file name */
#define MAKE_GUC_FW_PATH_MAJOR(prefix_, major_, minor_) \
#define MAKE_GUC_FW_PATH_MAJOR(prefix_, major_, minor_, patch_) \
__MAKE_UC_FW_PATH_MAJOR(prefix_, "guc", major_)
#define MAKE_GUC_FW_PATH_MMP(prefix_, major_, minor_, patch_) \
@ -196,9 +203,9 @@ struct __packed uc_fw_blob {
{ UC_FW_BLOB_BASE(major_, minor_, patch_, path_) \
.legacy = true }
#define GUC_FW_BLOB(prefix_, major_, minor_) \
UC_FW_BLOB_NEW(major_, minor_, 0, false, \
MAKE_GUC_FW_PATH_MAJOR(prefix_, major_, minor_))
#define GUC_FW_BLOB(prefix_, major_, minor_, patch_) \
UC_FW_BLOB_NEW(major_, minor_, patch_, false, \
MAKE_GUC_FW_PATH_MAJOR(prefix_, major_, minor_, patch_))
#define GUC_FW_BLOB_MMP(prefix_, major_, minor_, patch_) \
UC_FW_BLOB_OLD(major_, minor_, patch_, \
@ -232,20 +239,22 @@ struct fw_blobs_by_type {
u32 count;
};
static const struct uc_fw_platform_requirement blobs_guc[] = {
INTEL_GUC_FIRMWARE_DEFS(MAKE_FW_LIST, GUC_FW_BLOB, GUC_FW_BLOB_MMP)
};
static const struct uc_fw_platform_requirement blobs_huc[] = {
INTEL_HUC_FIRMWARE_DEFS(MAKE_FW_LIST, HUC_FW_BLOB, HUC_FW_BLOB_MMP, HUC_FW_BLOB_GSC)
};
static const struct fw_blobs_by_type blobs_all[INTEL_UC_FW_NUM_TYPES] = {
[INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
[INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
};
static void
__uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
{
static const struct uc_fw_platform_requirement blobs_guc[] = {
INTEL_GUC_FIRMWARE_DEFS(MAKE_FW_LIST, GUC_FW_BLOB, GUC_FW_BLOB_MMP)
};
static const struct uc_fw_platform_requirement blobs_huc[] = {
INTEL_HUC_FIRMWARE_DEFS(MAKE_FW_LIST, HUC_FW_BLOB, HUC_FW_BLOB_MMP, HUC_FW_BLOB_GSC)
};
static const struct fw_blobs_by_type blobs_all[INTEL_UC_FW_NUM_TYPES] = {
[INTEL_UC_FW_TYPE_GUC] = { blobs_guc, ARRAY_SIZE(blobs_guc) },
[INTEL_UC_FW_TYPE_HUC] = { blobs_huc, ARRAY_SIZE(blobs_huc) },
};
static bool verified[INTEL_UC_FW_NUM_TYPES];
const struct uc_fw_platform_requirement *fw_blobs;
enum intel_platform p = INTEL_INFO(i915)->platform;
u32 fw_count;
@ -285,6 +294,11 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
continue;
if (uc_fw->file_selected.path) {
/*
* Continuing an earlier search after a found blob failed to load.
* Once the previously chosen path has been found, clear it out
* and let the search continue from there.
*/
if (uc_fw->file_selected.path == blob->path)
uc_fw->file_selected.path = NULL;
@ -295,6 +309,7 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
uc_fw->file_wanted.path = blob->path;
uc_fw->file_wanted.ver.major = blob->major;
uc_fw->file_wanted.ver.minor = blob->minor;
uc_fw->file_wanted.ver.patch = blob->patch;
uc_fw->loaded_via_gsc = blob->loaded_via_gsc;
found = true;
break;
@ -304,76 +319,111 @@ __uc_fw_auto_select(struct drm_i915_private *i915, struct intel_uc_fw *uc_fw)
/* Failed to find a match for the last attempt?! */
uc_fw->file_selected.path = NULL;
}
}
static bool validate_fw_table_type(struct drm_i915_private *i915, enum intel_uc_fw_type type)
{
const struct uc_fw_platform_requirement *fw_blobs;
u32 fw_count;
int i, j;
if (type >= ARRAY_SIZE(blobs_all)) {
drm_err(&i915->drm, "No blob array for %s\n", intel_uc_fw_type_repr(type));
return false;
}
fw_blobs = blobs_all[type].blobs;
fw_count = blobs_all[type].count;
if (!fw_count)
return true;
/* make sure the list is ordered as expected */
if (IS_ENABLED(CONFIG_DRM_I915_SELFTEST) && !verified[uc_fw->type]) {
verified[uc_fw->type] = true;
for (i = 1; i < fw_count; i++) {
/* Next platform is good: */
if (fw_blobs[i].p < fw_blobs[i - 1].p)
for (i = 1; i < fw_count; i++) {
/* Versionless file names must be unique per platform: */
for (j = i + 1; j < fw_count; j++) {
/* Same platform? */
if (fw_blobs[i].p != fw_blobs[j].p)
continue;
/* Next platform revision is good: */
if (fw_blobs[i].p == fw_blobs[i - 1].p &&
fw_blobs[i].rev < fw_blobs[i - 1].rev)
if (fw_blobs[i].blob.path != fw_blobs[j].blob.path)
continue;
/* Platform/revision must be in order: */
if (fw_blobs[i].p != fw_blobs[i - 1].p ||
fw_blobs[i].rev != fw_blobs[i - 1].rev)
goto bad;
drm_err(&i915->drm, "Duplicate %s blobs: %s r%u %s%d.%d.%d [%s] matches %s%d.%d.%d [%s]\n",
intel_uc_fw_type_repr(type),
intel_platform_name(fw_blobs[j].p), fw_blobs[j].rev,
fw_blobs[j].blob.legacy ? "L" : "v",
fw_blobs[j].blob.major, fw_blobs[j].blob.minor,
fw_blobs[j].blob.patch, fw_blobs[j].blob.path,
fw_blobs[i].blob.legacy ? "L" : "v",
fw_blobs[i].blob.major, fw_blobs[i].blob.minor,
fw_blobs[i].blob.patch, fw_blobs[i].blob.path);
}
/* Next major version is good: */
if (fw_blobs[i].blob.major < fw_blobs[i - 1].blob.major)
/* Next platform is good: */
if (fw_blobs[i].p < fw_blobs[i - 1].p)
continue;
/* Next platform revision is good: */
if (fw_blobs[i].p == fw_blobs[i - 1].p &&
fw_blobs[i].rev < fw_blobs[i - 1].rev)
continue;
/* Platform/revision must be in order: */
if (fw_blobs[i].p != fw_blobs[i - 1].p ||
fw_blobs[i].rev != fw_blobs[i - 1].rev)
goto bad;
/* Next major version is good: */
if (fw_blobs[i].blob.major < fw_blobs[i - 1].blob.major)
continue;
/* New must be before legacy: */
if (!fw_blobs[i].blob.legacy && fw_blobs[i - 1].blob.legacy)
goto bad;
/* New to legacy also means 0.0 to X.Y (HuC), or X.0 to X.Y (GuC) */
if (fw_blobs[i].blob.legacy && !fw_blobs[i - 1].blob.legacy) {
if (!fw_blobs[i - 1].blob.major)
continue;
/* New must be before legacy: */
if (!fw_blobs[i].blob.legacy && fw_blobs[i - 1].blob.legacy)
goto bad;
/* New to legacy also means 0.0 to X.Y (HuC), or X.0 to X.Y (GuC) */
if (fw_blobs[i].blob.legacy && !fw_blobs[i - 1].blob.legacy) {
if (!fw_blobs[i - 1].blob.major)
continue;
if (fw_blobs[i].blob.major == fw_blobs[i - 1].blob.major)
continue;
}
/* Major versions must be in order: */
if (fw_blobs[i].blob.major != fw_blobs[i - 1].blob.major)
goto bad;
/* Next minor version is good: */
if (fw_blobs[i].blob.minor < fw_blobs[i - 1].blob.minor)
if (fw_blobs[i].blob.major == fw_blobs[i - 1].blob.major)
continue;
}
/* Minor versions must be in order: */
if (fw_blobs[i].blob.minor != fw_blobs[i - 1].blob.minor)
goto bad;
/* Major versions must be in order: */
if (fw_blobs[i].blob.major != fw_blobs[i - 1].blob.major)
goto bad;
/* Patch versions must be in order: */
if (fw_blobs[i].blob.patch <= fw_blobs[i - 1].blob.patch)
continue;
/* Next minor version is good: */
if (fw_blobs[i].blob.minor < fw_blobs[i - 1].blob.minor)
continue;
/* Minor versions must be in order: */
if (fw_blobs[i].blob.minor != fw_blobs[i - 1].blob.minor)
goto bad;
/* Patch versions must be in order and unique: */
if (fw_blobs[i].blob.patch < fw_blobs[i - 1].blob.patch)
continue;
bad:
drm_err(&i915->drm, "Invalid %s blob order: %s r%u %s%d.%d.%d comes before %s r%u %s%d.%d.%d\n",
intel_uc_fw_type_repr(uc_fw->type),
intel_platform_name(fw_blobs[i - 1].p), fw_blobs[i - 1].rev,
fw_blobs[i - 1].blob.legacy ? "L" : "v",
fw_blobs[i - 1].blob.major,
fw_blobs[i - 1].blob.minor,
fw_blobs[i - 1].blob.patch,
intel_platform_name(fw_blobs[i].p), fw_blobs[i].rev,
fw_blobs[i].blob.legacy ? "L" : "v",
fw_blobs[i].blob.major,
fw_blobs[i].blob.minor,
fw_blobs[i].blob.patch);
uc_fw->file_selected.path = NULL;
}
drm_err(&i915->drm, "Invalid %s blob order: %s r%u %s%d.%d.%d comes before %s r%u %s%d.%d.%d\n",
intel_uc_fw_type_repr(type),
intel_platform_name(fw_blobs[i - 1].p), fw_blobs[i - 1].rev,
fw_blobs[i - 1].blob.legacy ? "L" : "v",
fw_blobs[i - 1].blob.major,
fw_blobs[i - 1].blob.minor,
fw_blobs[i - 1].blob.patch,
intel_platform_name(fw_blobs[i].p), fw_blobs[i].rev,
fw_blobs[i].blob.legacy ? "L" : "v",
fw_blobs[i].blob.major,
fw_blobs[i].blob.minor,
fw_blobs[i].blob.patch);
return false;
}
return true;
}
static const char *__override_guc_firmware_path(struct drm_i915_private *i915)
@ -428,7 +478,8 @@ static void __uc_fw_user_override(struct drm_i915_private *i915, struct intel_uc
void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw,
enum intel_uc_fw_type type)
{
struct drm_i915_private *i915 = ____uc_fw_to_gt(uc_fw, type)->i915;
struct intel_gt *gt = ____uc_fw_to_gt(uc_fw, type);
struct drm_i915_private *i915 = gt->i915;
/*
* we use FIRMWARE_UNINITIALIZED to detect checks against uc_fw->status
@ -441,6 +492,12 @@ void intel_uc_fw_init_early(struct intel_uc_fw *uc_fw,
uc_fw->type = type;
if (HAS_GT_UC(i915)) {
if (!validate_fw_table_type(i915, type)) {
gt->uc.fw_table_invalid = true;
intel_uc_fw_change_status(uc_fw, INTEL_UC_FIRMWARE_NOT_SUPPORTED);
return;
}
__uc_fw_auto_select(i915, uc_fw);
__uc_fw_user_override(i915, uc_fw);
}
@ -782,10 +839,10 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
if (uc_fw->file_wanted.ver.major && uc_fw->file_selected.ver.major) {
/* Check the file's major version was as it claimed */
if (uc_fw->file_selected.ver.major != uc_fw->file_wanted.ver.major) {
gt_notice(gt, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor,
uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor);
UNEXPECTED(gt, "%s firmware %s: unexpected version: %u.%u != %u.%u\n",
intel_uc_fw_type_repr(uc_fw->type), uc_fw->file_selected.path,
uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor,
uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor);
if (!intel_uc_fw_is_overridden(uc_fw)) {
err = -ENOEXEC;
goto fail;
@ -793,6 +850,9 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
} else {
if (uc_fw->file_selected.ver.minor < uc_fw->file_wanted.ver.minor)
old_ver = true;
else if ((uc_fw->file_selected.ver.minor == uc_fw->file_wanted.ver.minor) &&
(uc_fw->file_selected.ver.patch < uc_fw->file_wanted.ver.patch))
old_ver = true;
}
}
@ -800,12 +860,16 @@ int intel_uc_fw_fetch(struct intel_uc_fw *uc_fw)
/* Preserve the version that was really wanted */
memcpy(&uc_fw->file_wanted, &file_ideal, sizeof(uc_fw->file_wanted));
gt_notice(gt, "%s firmware %s (%d.%d) is recommended, but only %s (%d.%d) was found\n",
intel_uc_fw_type_repr(uc_fw->type),
uc_fw->file_wanted.path,
uc_fw->file_wanted.ver.major, uc_fw->file_wanted.ver.minor,
uc_fw->file_selected.path,
uc_fw->file_selected.ver.major, uc_fw->file_selected.ver.minor);
UNEXPECTED(gt, "%s firmware %s (%d.%d.%d) is recommended, but only %s (%d.%d.%d) was found\n",
intel_uc_fw_type_repr(uc_fw->type),
uc_fw->file_wanted.path,
uc_fw->file_wanted.ver.major,
uc_fw->file_wanted.ver.minor,
uc_fw->file_wanted.ver.patch,
uc_fw->file_selected.path,
uc_fw->file_selected.ver.major,
uc_fw->file_selected.ver.minor,
uc_fw->file_selected.ver.patch);
gt_info(gt, "Consider updating your linux-firmware pkg or downloading from %s\n",
INTEL_UC_FIRMWARE_URL);
}
@ -893,9 +957,15 @@ static void uc_fw_bind_ggtt(struct intel_uc_fw *uc_fw)
pte_flags |= PTE_LM;
if (ggtt->vm.raw_insert_entries)
ggtt->vm.raw_insert_entries(&ggtt->vm, dummy, I915_CACHE_NONE, pte_flags);
ggtt->vm.raw_insert_entries(&ggtt->vm, dummy,
i915_gem_get_pat_index(ggtt->vm.i915,
I915_CACHE_NONE),
pte_flags);
else
ggtt->vm.insert_entries(&ggtt->vm, dummy, I915_CACHE_NONE, pte_flags);
ggtt->vm.insert_entries(&ggtt->vm, dummy,
i915_gem_get_pat_index(ggtt->vm.i915,
I915_CACHE_NONE),
pte_flags);
}
static void uc_fw_unbind_ggtt(struct intel_uc_fw *uc_fw)
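
The version check in the intel_uc_fw_fetch() hunk above now compares the full major.minor.patch tuple when deciding whether the firmware that was actually found is older than the one the table asked for. A minimal standalone sketch of that ordering (struct and function names here are made up; in the driver a major-version mismatch is rejected outright unless the file path was overridden, and a lower minor/patch only downgrades to a warning):

struct fw_ver { unsigned int major, minor, patch; };

static bool fw_ver_is_older(const struct fw_ver *found, const struct fw_ver *wanted)
{
	/* Same ordering as the old_ver logic above. */
	if (found->minor != wanted->minor)
		return found->minor < wanted->minor;
	return found->patch < wanted->patch;
}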


@ -330,7 +330,7 @@ void intel_vgpu_reset_resource(struct intel_vgpu *vgpu)
/**
* intel_vgpu_alloc_resource() - allocate HW resource for a vGPU
* @vgpu: vGPU
* @param: vGPU creation params
* @conf: vGPU creation params
*
* This function is used to allocate HW resource for a vGPU. User specifies
* the resource configuration through the creation params.


@ -49,9 +49,9 @@ void i915_active_noop(struct dma_fence *fence, struct dma_fence_cb *cb);
/**
* __i915_active_fence_init - prepares the activity tracker for use
* @active - the active tracker
* @fence - initial fence to track, can be NULL
* @func - a callback when then the tracker is retired (becomes idle),
* @active: the active tracker
* @fence: initial fence to track, can be NULL
* @fn: a callback when then the tracker is retired (becomes idle),
* can be NULL
*
* i915_active_fence_init() prepares the embedded @active struct for use as
@ -77,8 +77,8 @@ __i915_active_fence_set(struct i915_active_fence *active,
/**
* i915_active_fence_set - updates the tracker to watch the current fence
* @active - the active tracker
* @rq - the request to watch
* @active: the active tracker
* @rq: the request to watch
*
* i915_active_fence_set() watches the given @rq for completion. While
* that @rq is busy, the @active reports busy. When that @rq is signaled
@ -89,7 +89,7 @@ i915_active_fence_set(struct i915_active_fence *active,
struct i915_request *rq);
/**
* i915_active_fence_get - return a reference to the active fence
* @active - the active tracker
* @active: the active tracker
*
* i915_active_fence_get() returns a reference to the active fence,
* or NULL if the active tracker is idle. The reference is obtained under RCU,
@ -111,7 +111,7 @@ i915_active_fence_get(struct i915_active_fence *active)
/**
* i915_active_fence_isset - report whether the active tracker is assigned
* @active - the active tracker
* @active: the active tracker
*
* i915_active_fence_isset() returns true if the active tracker is currently
* assigned to a fence. Due to the lazy retiring, that fence may be idle
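
These hunks only change the kernel-doc parameter syntax from the old '@param - text' form to the '@param: text' form that scripts/kernel-doc parses. For reference, a well-formed block looks like this (function and parameter names are made up):

/**
 * widget_enable - switch a widget on
 * @w: the widget to enable
 * @timeout_ms: how long to wait for the hardware, in milliseconds
 *
 * Return: 0 on success or a negative error code.
 */
int widget_enable(struct widget *w, unsigned int timeout_ms);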


@ -138,21 +138,54 @@ static const char *stringify_vma_type(const struct i915_vma *vma)
return "ppgtt";
}
static const char *i915_cache_level_str(struct drm_i915_private *i915, int type)
static const char *i915_cache_level_str(struct drm_i915_gem_object *obj)
{
switch (type) {
case I915_CACHE_NONE: return " uncached";
case I915_CACHE_LLC: return HAS_LLC(i915) ? " LLC" : " snooped";
case I915_CACHE_L3_LLC: return " L3+LLC";
case I915_CACHE_WT: return " WT";
default: return "";
struct drm_i915_private *i915 = obj_to_i915(obj);
if (IS_METEORLAKE(i915)) {
switch (obj->pat_index) {
case 0: return " WB";
case 1: return " WT";
case 2: return " UC";
case 3: return " WB (1-Way Coh)";
case 4: return " WB (2-Way Coh)";
default: return " not defined";
}
} else if (IS_PONTEVECCHIO(i915)) {
switch (obj->pat_index) {
case 0: return " UC";
case 1: return " WC";
case 2: return " WT";
case 3: return " WB";
case 4: return " WT (CLOS1)";
case 5: return " WB (CLOS1)";
case 6: return " WT (CLOS2)";
case 7: return " WT (CLOS2)";
default: return " not defined";
}
} else if (GRAPHICS_VER(i915) >= 12) {
switch (obj->pat_index) {
case 0: return " WB";
case 1: return " WC";
case 2: return " WT";
case 3: return " UC";
default: return " not defined";
}
} else {
switch (obj->pat_index) {
case 0: return " UC";
case 1: return HAS_LLC(i915) ?
" LLC" : " snooped";
case 2: return " L3+LLC";
case 3: return " WT";
default: return " not defined";
}
}
}
void
i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
{
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
struct i915_vma *vma;
int pin_count = 0;
@ -164,7 +197,7 @@ i915_debugfs_describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
obj->base.size / 1024,
obj->read_domains,
obj->write_domain,
i915_cache_level_str(dev_priv, obj->cache_level),
i915_cache_level_str(obj),
obj->mm.dirty ? " dirty" : "",
obj->mm.madv == I915_MADV_DONTNEED ? " purgeable" : "");
if (obj->base.name)
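
The decode above has to branch per platform because the PAT table layout differs between generations. Going the other direction, the call sites converted elsewhere in this series resolve a legacy cache level to a PAT index through a per-platform table instead of a switch; a minimal sketch of that idea (names are made up, values loosely follow the MTL mapping added further down):

enum fake_cache_level { FAKE_CACHE_NONE, FAKE_CACHE_LLC, FAKE_CACHE_L3_LLC, FAKE_CACHE_WT, FAKE_CACHE_MAX };

/* Made-up per-platform table; the real ones live in the device info. */
static const unsigned int fake_cachelevel_to_pat[FAKE_CACHE_MAX] = {
	[FAKE_CACHE_NONE]   = 2,
	[FAKE_CACHE_LLC]    = 3,
	[FAKE_CACHE_L3_LLC] = 3,
	[FAKE_CACHE_WT]     = 1,
};

static unsigned int fake_get_pat_index(enum fake_cache_level level)
{
	return fake_cachelevel_to_pat[level];
}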


@ -147,11 +147,7 @@ void i915_drm_client_fdinfo(struct seq_file *m, struct file *f)
PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
seq_printf(m, "drm-client-id:\t%u\n", client->id);
/*
* Temporarily skip showing client engine information with GuC submission till
* fetching engine busyness is implemented in the GuC submission backend
*/
if (GRAPHICS_VER(i915) < 8 || intel_uc_uses_guc_submission(&i915->gt0.uc))
if (GRAPHICS_VER(i915) < 8)
return;
for (i = 0; i < ARRAY_SIZE(uabi_class_names); i++)


@ -381,11 +381,11 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
}
/* Simple iterator over all initialised engines */
#define for_each_engine(engine__, dev_priv__, id__) \
#define for_each_engine(engine__, gt__, id__) \
for ((id__) = 0; \
(id__) < I915_NUM_ENGINES; \
(id__)++) \
for_each_if ((engine__) = (dev_priv__)->engine[(id__)])
for_each_if ((engine__) = (gt__)->engine[(id__)])
/* Iterator over subset of engines selected by mask */
#define for_each_engine_masked(engine__, gt__, mask__, tmp__) \
@ -407,11 +407,11 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
(engine__) && (engine__)->uabi_class == (class__); \
(engine__) = rb_to_uabi_engine(rb_next(&(engine__)->uabi_node)))
#define INTEL_INFO(dev_priv) (&(dev_priv)->__info)
#define RUNTIME_INFO(dev_priv) (&(dev_priv)->__runtime)
#define DRIVER_CAPS(dev_priv) (&(dev_priv)->caps)
#define INTEL_INFO(i915) (&(i915)->__info)
#define RUNTIME_INFO(i915) (&(i915)->__runtime)
#define DRIVER_CAPS(i915) (&(i915)->caps)
#define INTEL_DEVID(dev_priv) (RUNTIME_INFO(dev_priv)->device_id)
#define INTEL_DEVID(i915) (RUNTIME_INFO(i915)->device_id)
#define IP_VER(ver, rel) ((ver) << 8 | (rel))
@ -431,7 +431,7 @@ static inline struct intel_gt *to_gt(struct drm_i915_private *i915)
#define IS_DISPLAY_VER(i915, from, until) \
(DISPLAY_VER(i915) >= (from) && DISPLAY_VER(i915) <= (until))
#define INTEL_REVID(dev_priv) (to_pci_dev((dev_priv)->drm.dev)->revision)
#define INTEL_REVID(i915) (to_pci_dev((i915)->drm.dev)->revision)
#define INTEL_DISPLAY_STEP(__i915) (RUNTIME_INFO(__i915)->step.display_step)
#define INTEL_GRAPHICS_STEP(__i915) (RUNTIME_INFO(__i915)->step.graphics_step)
@ -516,135 +516,135 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
return ((mask << (msb - pb)) & (mask << (msb - s))) & BIT(msb);
}
#define IS_MOBILE(dev_priv) (INTEL_INFO(dev_priv)->is_mobile)
#define IS_DGFX(dev_priv) (INTEL_INFO(dev_priv)->is_dgfx)
#define IS_MOBILE(i915) (INTEL_INFO(i915)->is_mobile)
#define IS_DGFX(i915) (INTEL_INFO(i915)->is_dgfx)
#define IS_I830(dev_priv) IS_PLATFORM(dev_priv, INTEL_I830)
#define IS_I845G(dev_priv) IS_PLATFORM(dev_priv, INTEL_I845G)
#define IS_I85X(dev_priv) IS_PLATFORM(dev_priv, INTEL_I85X)
#define IS_I865G(dev_priv) IS_PLATFORM(dev_priv, INTEL_I865G)
#define IS_I915G(dev_priv) IS_PLATFORM(dev_priv, INTEL_I915G)
#define IS_I915GM(dev_priv) IS_PLATFORM(dev_priv, INTEL_I915GM)
#define IS_I945G(dev_priv) IS_PLATFORM(dev_priv, INTEL_I945G)
#define IS_I945GM(dev_priv) IS_PLATFORM(dev_priv, INTEL_I945GM)
#define IS_I965G(dev_priv) IS_PLATFORM(dev_priv, INTEL_I965G)
#define IS_I965GM(dev_priv) IS_PLATFORM(dev_priv, INTEL_I965GM)
#define IS_G45(dev_priv) IS_PLATFORM(dev_priv, INTEL_G45)
#define IS_GM45(dev_priv) IS_PLATFORM(dev_priv, INTEL_GM45)
#define IS_G4X(dev_priv) (IS_G45(dev_priv) || IS_GM45(dev_priv))
#define IS_PINEVIEW(dev_priv) IS_PLATFORM(dev_priv, INTEL_PINEVIEW)
#define IS_G33(dev_priv) IS_PLATFORM(dev_priv, INTEL_G33)
#define IS_IRONLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_IRONLAKE)
#define IS_IRONLAKE_M(dev_priv) \
(IS_PLATFORM(dev_priv, INTEL_IRONLAKE) && IS_MOBILE(dev_priv))
#define IS_SANDYBRIDGE(dev_priv) IS_PLATFORM(dev_priv, INTEL_SANDYBRIDGE)
#define IS_IVYBRIDGE(dev_priv) IS_PLATFORM(dev_priv, INTEL_IVYBRIDGE)
#define IS_IVB_GT1(dev_priv) (IS_IVYBRIDGE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 1)
#define IS_VALLEYVIEW(dev_priv) IS_PLATFORM(dev_priv, INTEL_VALLEYVIEW)
#define IS_CHERRYVIEW(dev_priv) IS_PLATFORM(dev_priv, INTEL_CHERRYVIEW)
#define IS_HASWELL(dev_priv) IS_PLATFORM(dev_priv, INTEL_HASWELL)
#define IS_BROADWELL(dev_priv) IS_PLATFORM(dev_priv, INTEL_BROADWELL)
#define IS_SKYLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_SKYLAKE)
#define IS_BROXTON(dev_priv) IS_PLATFORM(dev_priv, INTEL_BROXTON)
#define IS_KABYLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_KABYLAKE)
#define IS_GEMINILAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_GEMINILAKE)
#define IS_COFFEELAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_COFFEELAKE)
#define IS_COMETLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_COMETLAKE)
#define IS_ICELAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_ICELAKE)
#define IS_JSL_EHL(dev_priv) (IS_PLATFORM(dev_priv, INTEL_JASPERLAKE) || \
IS_PLATFORM(dev_priv, INTEL_ELKHARTLAKE))
#define IS_TIGERLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_TIGERLAKE)
#define IS_ROCKETLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_ROCKETLAKE)
#define IS_DG1(dev_priv) IS_PLATFORM(dev_priv, INTEL_DG1)
#define IS_ALDERLAKE_S(dev_priv) IS_PLATFORM(dev_priv, INTEL_ALDERLAKE_S)
#define IS_ALDERLAKE_P(dev_priv) IS_PLATFORM(dev_priv, INTEL_ALDERLAKE_P)
#define IS_XEHPSDV(dev_priv) IS_PLATFORM(dev_priv, INTEL_XEHPSDV)
#define IS_DG2(dev_priv) IS_PLATFORM(dev_priv, INTEL_DG2)
#define IS_PONTEVECCHIO(dev_priv) IS_PLATFORM(dev_priv, INTEL_PONTEVECCHIO)
#define IS_METEORLAKE(dev_priv) IS_PLATFORM(dev_priv, INTEL_METEORLAKE)
#define IS_I830(i915) IS_PLATFORM(i915, INTEL_I830)
#define IS_I845G(i915) IS_PLATFORM(i915, INTEL_I845G)
#define IS_I85X(i915) IS_PLATFORM(i915, INTEL_I85X)
#define IS_I865G(i915) IS_PLATFORM(i915, INTEL_I865G)
#define IS_I915G(i915) IS_PLATFORM(i915, INTEL_I915G)
#define IS_I915GM(i915) IS_PLATFORM(i915, INTEL_I915GM)
#define IS_I945G(i915) IS_PLATFORM(i915, INTEL_I945G)
#define IS_I945GM(i915) IS_PLATFORM(i915, INTEL_I945GM)
#define IS_I965G(i915) IS_PLATFORM(i915, INTEL_I965G)
#define IS_I965GM(i915) IS_PLATFORM(i915, INTEL_I965GM)
#define IS_G45(i915) IS_PLATFORM(i915, INTEL_G45)
#define IS_GM45(i915) IS_PLATFORM(i915, INTEL_GM45)
#define IS_G4X(i915) (IS_G45(i915) || IS_GM45(i915))
#define IS_PINEVIEW(i915) IS_PLATFORM(i915, INTEL_PINEVIEW)
#define IS_G33(i915) IS_PLATFORM(i915, INTEL_G33)
#define IS_IRONLAKE(i915) IS_PLATFORM(i915, INTEL_IRONLAKE)
#define IS_IRONLAKE_M(i915) \
(IS_PLATFORM(i915, INTEL_IRONLAKE) && IS_MOBILE(i915))
#define IS_SANDYBRIDGE(i915) IS_PLATFORM(i915, INTEL_SANDYBRIDGE)
#define IS_IVYBRIDGE(i915) IS_PLATFORM(i915, INTEL_IVYBRIDGE)
#define IS_IVB_GT1(i915) (IS_IVYBRIDGE(i915) && \
INTEL_INFO(i915)->gt == 1)
#define IS_VALLEYVIEW(i915) IS_PLATFORM(i915, INTEL_VALLEYVIEW)
#define IS_CHERRYVIEW(i915) IS_PLATFORM(i915, INTEL_CHERRYVIEW)
#define IS_HASWELL(i915) IS_PLATFORM(i915, INTEL_HASWELL)
#define IS_BROADWELL(i915) IS_PLATFORM(i915, INTEL_BROADWELL)
#define IS_SKYLAKE(i915) IS_PLATFORM(i915, INTEL_SKYLAKE)
#define IS_BROXTON(i915) IS_PLATFORM(i915, INTEL_BROXTON)
#define IS_KABYLAKE(i915) IS_PLATFORM(i915, INTEL_KABYLAKE)
#define IS_GEMINILAKE(i915) IS_PLATFORM(i915, INTEL_GEMINILAKE)
#define IS_COFFEELAKE(i915) IS_PLATFORM(i915, INTEL_COFFEELAKE)
#define IS_COMETLAKE(i915) IS_PLATFORM(i915, INTEL_COMETLAKE)
#define IS_ICELAKE(i915) IS_PLATFORM(i915, INTEL_ICELAKE)
#define IS_JSL_EHL(i915) (IS_PLATFORM(i915, INTEL_JASPERLAKE) || \
IS_PLATFORM(i915, INTEL_ELKHARTLAKE))
#define IS_TIGERLAKE(i915) IS_PLATFORM(i915, INTEL_TIGERLAKE)
#define IS_ROCKETLAKE(i915) IS_PLATFORM(i915, INTEL_ROCKETLAKE)
#define IS_DG1(i915) IS_PLATFORM(i915, INTEL_DG1)
#define IS_ALDERLAKE_S(i915) IS_PLATFORM(i915, INTEL_ALDERLAKE_S)
#define IS_ALDERLAKE_P(i915) IS_PLATFORM(i915, INTEL_ALDERLAKE_P)
#define IS_XEHPSDV(i915) IS_PLATFORM(i915, INTEL_XEHPSDV)
#define IS_DG2(i915) IS_PLATFORM(i915, INTEL_DG2)
#define IS_PONTEVECCHIO(i915) IS_PLATFORM(i915, INTEL_PONTEVECCHIO)
#define IS_METEORLAKE(i915) IS_PLATFORM(i915, INTEL_METEORLAKE)
#define IS_METEORLAKE_M(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_METEORLAKE, INTEL_SUBPLATFORM_M)
#define IS_METEORLAKE_P(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_METEORLAKE, INTEL_SUBPLATFORM_P)
#define IS_DG2_G10(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_DG2, INTEL_SUBPLATFORM_G10)
#define IS_DG2_G11(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_DG2, INTEL_SUBPLATFORM_G11)
#define IS_DG2_G12(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_DG2, INTEL_SUBPLATFORM_G12)
#define IS_ADLS_RPLS(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_ALDERLAKE_S, INTEL_SUBPLATFORM_RPL)
#define IS_ADLP_N(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_N)
#define IS_ADLP_RPLP(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_RPL)
#define IS_ADLP_RPLU(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_RPLU)
#define IS_HSW_EARLY_SDV(dev_priv) (IS_HASWELL(dev_priv) && \
(INTEL_DEVID(dev_priv) & 0xFF00) == 0x0C00)
#define IS_BDW_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_BROADWELL, INTEL_SUBPLATFORM_ULT)
#define IS_BDW_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_BROADWELL, INTEL_SUBPLATFORM_ULX)
#define IS_BDW_GT3(dev_priv) (IS_BROADWELL(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 3)
#define IS_HSW_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_HASWELL, INTEL_SUBPLATFORM_ULT)
#define IS_HSW_GT3(dev_priv) (IS_HASWELL(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 3)
#define IS_HSW_GT1(dev_priv) (IS_HASWELL(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 1)
#define IS_METEORLAKE_M(i915) \
IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_M)
#define IS_METEORLAKE_P(i915) \
IS_SUBPLATFORM(i915, INTEL_METEORLAKE, INTEL_SUBPLATFORM_P)
#define IS_DG2_G10(i915) \
IS_SUBPLATFORM(i915, INTEL_DG2, INTEL_SUBPLATFORM_G10)
#define IS_DG2_G11(i915) \
IS_SUBPLATFORM(i915, INTEL_DG2, INTEL_SUBPLATFORM_G11)
#define IS_DG2_G12(i915) \
IS_SUBPLATFORM(i915, INTEL_DG2, INTEL_SUBPLATFORM_G12)
#define IS_ADLS_RPLS(i915) \
IS_SUBPLATFORM(i915, INTEL_ALDERLAKE_S, INTEL_SUBPLATFORM_RPL)
#define IS_ADLP_N(i915) \
IS_SUBPLATFORM(i915, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_N)
#define IS_ADLP_RPLP(i915) \
IS_SUBPLATFORM(i915, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_RPL)
#define IS_ADLP_RPLU(i915) \
IS_SUBPLATFORM(i915, INTEL_ALDERLAKE_P, INTEL_SUBPLATFORM_RPLU)
#define IS_HSW_EARLY_SDV(i915) (IS_HASWELL(i915) && \
(INTEL_DEVID(i915) & 0xFF00) == 0x0C00)
#define IS_BDW_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_BROADWELL, INTEL_SUBPLATFORM_ULT)
#define IS_BDW_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_BROADWELL, INTEL_SUBPLATFORM_ULX)
#define IS_BDW_GT3(i915) (IS_BROADWELL(i915) && \
INTEL_INFO(i915)->gt == 3)
#define IS_HSW_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_HASWELL, INTEL_SUBPLATFORM_ULT)
#define IS_HSW_GT3(i915) (IS_HASWELL(i915) && \
INTEL_INFO(i915)->gt == 3)
#define IS_HSW_GT1(i915) (IS_HASWELL(i915) && \
INTEL_INFO(i915)->gt == 1)
/* ULX machines are also considered ULT. */
#define IS_HSW_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_HASWELL, INTEL_SUBPLATFORM_ULX)
#define IS_SKL_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_SKYLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_SKL_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_SKYLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_KBL_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_KABYLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_KBL_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_KABYLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_SKL_GT2(dev_priv) (IS_SKYLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 2)
#define IS_SKL_GT3(dev_priv) (IS_SKYLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 3)
#define IS_SKL_GT4(dev_priv) (IS_SKYLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 4)
#define IS_KBL_GT2(dev_priv) (IS_KABYLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 2)
#define IS_KBL_GT3(dev_priv) (IS_KABYLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 3)
#define IS_CFL_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_COFFEELAKE, INTEL_SUBPLATFORM_ULT)
#define IS_CFL_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_COFFEELAKE, INTEL_SUBPLATFORM_ULX)
#define IS_CFL_GT2(dev_priv) (IS_COFFEELAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 2)
#define IS_CFL_GT3(dev_priv) (IS_COFFEELAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 3)
#define IS_HSW_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_HASWELL, INTEL_SUBPLATFORM_ULX)
#define IS_SKL_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_SKYLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_SKL_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_SKYLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_KBL_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_KABYLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_KBL_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_KABYLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_SKL_GT2(i915) (IS_SKYLAKE(i915) && \
INTEL_INFO(i915)->gt == 2)
#define IS_SKL_GT3(i915) (IS_SKYLAKE(i915) && \
INTEL_INFO(i915)->gt == 3)
#define IS_SKL_GT4(i915) (IS_SKYLAKE(i915) && \
INTEL_INFO(i915)->gt == 4)
#define IS_KBL_GT2(i915) (IS_KABYLAKE(i915) && \
INTEL_INFO(i915)->gt == 2)
#define IS_KBL_GT3(i915) (IS_KABYLAKE(i915) && \
INTEL_INFO(i915)->gt == 3)
#define IS_CFL_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_COFFEELAKE, INTEL_SUBPLATFORM_ULT)
#define IS_CFL_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_COFFEELAKE, INTEL_SUBPLATFORM_ULX)
#define IS_CFL_GT2(i915) (IS_COFFEELAKE(i915) && \
INTEL_INFO(i915)->gt == 2)
#define IS_CFL_GT3(i915) (IS_COFFEELAKE(i915) && \
INTEL_INFO(i915)->gt == 3)
#define IS_CML_ULT(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_COMETLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_CML_ULX(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_COMETLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_CML_GT2(dev_priv) (IS_COMETLAKE(dev_priv) && \
INTEL_INFO(dev_priv)->gt == 2)
#define IS_CML_ULT(i915) \
IS_SUBPLATFORM(i915, INTEL_COMETLAKE, INTEL_SUBPLATFORM_ULT)
#define IS_CML_ULX(i915) \
IS_SUBPLATFORM(i915, INTEL_COMETLAKE, INTEL_SUBPLATFORM_ULX)
#define IS_CML_GT2(i915) (IS_COMETLAKE(i915) && \
INTEL_INFO(i915)->gt == 2)
#define IS_ICL_WITH_PORT_F(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_ICELAKE, INTEL_SUBPLATFORM_PORTF)
#define IS_ICL_WITH_PORT_F(i915) \
IS_SUBPLATFORM(i915, INTEL_ICELAKE, INTEL_SUBPLATFORM_PORTF)
#define IS_TGL_UY(dev_priv) \
IS_SUBPLATFORM(dev_priv, INTEL_TIGERLAKE, INTEL_SUBPLATFORM_UY)
#define IS_TGL_UY(i915) \
IS_SUBPLATFORM(i915, INTEL_TIGERLAKE, INTEL_SUBPLATFORM_UY)
#define IS_SKL_GRAPHICS_STEP(p, since, until) (IS_SKYLAKE(p) && IS_GRAPHICS_STEP(p, since, until))
#define IS_KBL_GRAPHICS_STEP(dev_priv, since, until) \
(IS_KABYLAKE(dev_priv) && IS_GRAPHICS_STEP(dev_priv, since, until))
#define IS_KBL_DISPLAY_STEP(dev_priv, since, until) \
(IS_KABYLAKE(dev_priv) && IS_DISPLAY_STEP(dev_priv, since, until))
#define IS_KBL_GRAPHICS_STEP(i915, since, until) \
(IS_KABYLAKE(i915) && IS_GRAPHICS_STEP(i915, since, until))
#define IS_KBL_DISPLAY_STEP(i915, since, until) \
(IS_KABYLAKE(i915) && IS_DISPLAY_STEP(i915, since, until))
#define IS_JSL_EHL_GRAPHICS_STEP(p, since, until) \
(IS_JSL_EHL(p) && IS_GRAPHICS_STEP(p, since, until))
@ -720,9 +720,9 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
(IS_PONTEVECCHIO(__i915) && \
IS_GRAPHICS_STEP(__i915, since, until))
#define IS_LP(dev_priv) (INTEL_INFO(dev_priv)->is_lp)
#define IS_GEN9_LP(dev_priv) (GRAPHICS_VER(dev_priv) == 9 && IS_LP(dev_priv))
#define IS_GEN9_BC(dev_priv) (GRAPHICS_VER(dev_priv) == 9 && !IS_LP(dev_priv))
#define IS_LP(i915) (INTEL_INFO(i915)->is_lp)
#define IS_GEN9_LP(i915) (GRAPHICS_VER(i915) == 9 && IS_LP(i915))
#define IS_GEN9_BC(i915) (GRAPHICS_VER(i915) == 9 && !IS_LP(i915))
#define __HAS_ENGINE(engine_mask, id) ((engine_mask) & BIT(id))
#define HAS_ENGINE(gt, id) __HAS_ENGINE((gt)->info.engine_mask, id)
@ -747,180 +747,180 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define CCS_MASK(gt) \
ENGINE_INSTANCES_MASK(gt, CCS0, I915_MAX_CCS)
#define HAS_MEDIA_RATIO_MODE(dev_priv) (INTEL_INFO(dev_priv)->has_media_ratio_mode)
#define HAS_MEDIA_RATIO_MODE(i915) (INTEL_INFO(i915)->has_media_ratio_mode)
/*
* The Gen7 cmdparser copies the scanned buffer to the ggtt for execution
* All later gens can run the final buffer from the ppgtt
*/
#define CMDPARSER_USES_GGTT(dev_priv) (GRAPHICS_VER(dev_priv) == 7)
#define CMDPARSER_USES_GGTT(i915) (GRAPHICS_VER(i915) == 7)
#define HAS_LLC(dev_priv) (INTEL_INFO(dev_priv)->has_llc)
#define HAS_4TILE(dev_priv) (INTEL_INFO(dev_priv)->has_4tile)
#define HAS_SNOOP(dev_priv) (INTEL_INFO(dev_priv)->has_snoop)
#define HAS_EDRAM(dev_priv) ((dev_priv)->edram_size_mb)
#define HAS_SECURE_BATCHES(dev_priv) (GRAPHICS_VER(dev_priv) < 6)
#define HAS_WT(dev_priv) HAS_EDRAM(dev_priv)
#define HAS_LLC(i915) (INTEL_INFO(i915)->has_llc)
#define HAS_4TILE(i915) (INTEL_INFO(i915)->has_4tile)
#define HAS_SNOOP(i915) (INTEL_INFO(i915)->has_snoop)
#define HAS_EDRAM(i915) ((i915)->edram_size_mb)
#define HAS_SECURE_BATCHES(i915) (GRAPHICS_VER(i915) < 6)
#define HAS_WT(i915) HAS_EDRAM(i915)
#define HWS_NEEDS_PHYSICAL(dev_priv) (INTEL_INFO(dev_priv)->hws_needs_physical)
#define HWS_NEEDS_PHYSICAL(i915) (INTEL_INFO(i915)->hws_needs_physical)
#define HAS_LOGICAL_RING_CONTEXTS(dev_priv) \
(INTEL_INFO(dev_priv)->has_logical_ring_contexts)
#define HAS_LOGICAL_RING_ELSQ(dev_priv) \
(INTEL_INFO(dev_priv)->has_logical_ring_elsq)
#define HAS_LOGICAL_RING_CONTEXTS(i915) \
(INTEL_INFO(i915)->has_logical_ring_contexts)
#define HAS_LOGICAL_RING_ELSQ(i915) \
(INTEL_INFO(i915)->has_logical_ring_elsq)
#define HAS_EXECLISTS(dev_priv) HAS_LOGICAL_RING_CONTEXTS(dev_priv)
#define HAS_EXECLISTS(i915) HAS_LOGICAL_RING_CONTEXTS(i915)
#define INTEL_PPGTT(dev_priv) (RUNTIME_INFO(dev_priv)->ppgtt_type)
#define HAS_PPGTT(dev_priv) \
(INTEL_PPGTT(dev_priv) != INTEL_PPGTT_NONE)
#define HAS_FULL_PPGTT(dev_priv) \
(INTEL_PPGTT(dev_priv) >= INTEL_PPGTT_FULL)
#define INTEL_PPGTT(i915) (RUNTIME_INFO(i915)->ppgtt_type)
#define HAS_PPGTT(i915) \
(INTEL_PPGTT(i915) != INTEL_PPGTT_NONE)
#define HAS_FULL_PPGTT(i915) \
(INTEL_PPGTT(i915) >= INTEL_PPGTT_FULL)
#define HAS_PAGE_SIZES(dev_priv, sizes) ({ \
#define HAS_PAGE_SIZES(i915, sizes) ({ \
GEM_BUG_ON((sizes) == 0); \
((sizes) & ~RUNTIME_INFO(dev_priv)->page_sizes) == 0; \
((sizes) & ~RUNTIME_INFO(i915)->page_sizes) == 0; \
})
#define HAS_OVERLAY(dev_priv) (INTEL_INFO(dev_priv)->display.has_overlay)
#define OVERLAY_NEEDS_PHYSICAL(dev_priv) \
(INTEL_INFO(dev_priv)->display.overlay_needs_physical)
#define HAS_OVERLAY(i915) (INTEL_INFO(i915)->display.has_overlay)
#define OVERLAY_NEEDS_PHYSICAL(i915) \
(INTEL_INFO(i915)->display.overlay_needs_physical)
/* Early gen2 have a totally busted CS tlb and require pinned batches. */
#define HAS_BROKEN_CS_TLB(dev_priv) (IS_I830(dev_priv) || IS_I845G(dev_priv))
#define HAS_BROKEN_CS_TLB(i915) (IS_I830(i915) || IS_I845G(i915))
#define NEEDS_RC6_CTX_CORRUPTION_WA(dev_priv) \
(IS_BROADWELL(dev_priv) || GRAPHICS_VER(dev_priv) == 9)
#define NEEDS_RC6_CTX_CORRUPTION_WA(i915) \
(IS_BROADWELL(i915) || GRAPHICS_VER(i915) == 9)
/* WaRsDisableCoarsePowerGating:skl,cnl */
#define NEEDS_WaRsDisableCoarsePowerGating(dev_priv) \
(IS_SKL_GT3(dev_priv) || IS_SKL_GT4(dev_priv))
#define NEEDS_WaRsDisableCoarsePowerGating(i915) \
(IS_SKL_GT3(i915) || IS_SKL_GT4(i915))
#define HAS_GMBUS_IRQ(dev_priv) (DISPLAY_VER(dev_priv) >= 4)
#define HAS_GMBUS_BURST_READ(dev_priv) (DISPLAY_VER(dev_priv) >= 11 || \
IS_GEMINILAKE(dev_priv) || \
IS_KABYLAKE(dev_priv))
#define HAS_GMBUS_IRQ(i915) (DISPLAY_VER(i915) >= 4)
#define HAS_GMBUS_BURST_READ(i915) (DISPLAY_VER(i915) >= 11 || \
IS_GEMINILAKE(i915) || \
IS_KABYLAKE(i915))
/* With the 945 and later, Y tiling got adjusted so that it was 32 128-byte
* rows, which changed the alignment requirements and fence programming.
*/
#define HAS_128_BYTE_Y_TILING(dev_priv) (GRAPHICS_VER(dev_priv) != 2 && \
!(IS_I915G(dev_priv) || IS_I915GM(dev_priv)))
#define SUPPORTS_TV(dev_priv) (INTEL_INFO(dev_priv)->display.supports_tv)
#define I915_HAS_HOTPLUG(dev_priv) (INTEL_INFO(dev_priv)->display.has_hotplug)
#define HAS_128_BYTE_Y_TILING(i915) (GRAPHICS_VER(i915) != 2 && \
!(IS_I915G(i915) || IS_I915GM(i915)))
#define SUPPORTS_TV(i915) (INTEL_INFO(i915)->display.supports_tv)
#define I915_HAS_HOTPLUG(i915) (INTEL_INFO(i915)->display.has_hotplug)
#define HAS_FW_BLC(dev_priv) (DISPLAY_VER(dev_priv) > 2)
#define HAS_FBC(dev_priv) (RUNTIME_INFO(dev_priv)->fbc_mask != 0)
#define HAS_CUR_FBC(dev_priv) (!HAS_GMCH(dev_priv) && DISPLAY_VER(dev_priv) >= 7)
#define HAS_FW_BLC(i915) (DISPLAY_VER(i915) > 2)
#define HAS_FBC(i915) (RUNTIME_INFO(i915)->fbc_mask != 0)
#define HAS_CUR_FBC(i915) (!HAS_GMCH(i915) && DISPLAY_VER(i915) >= 7)
#define HAS_DPT(dev_priv) (DISPLAY_VER(dev_priv) >= 13)
#define HAS_DPT(i915) (DISPLAY_VER(i915) >= 13)
#define HAS_IPS(dev_priv) (IS_HSW_ULT(dev_priv) || IS_BROADWELL(dev_priv))
#define HAS_IPS(i915) (IS_HSW_ULT(i915) || IS_BROADWELL(i915))
#define HAS_DP_MST(dev_priv) (INTEL_INFO(dev_priv)->display.has_dp_mst)
#define HAS_DP20(dev_priv) (IS_DG2(dev_priv) || DISPLAY_VER(dev_priv) >= 14)
#define HAS_DP_MST(i915) (INTEL_INFO(i915)->display.has_dp_mst)
#define HAS_DP20(i915) (IS_DG2(i915) || DISPLAY_VER(i915) >= 14)
#define HAS_DOUBLE_BUFFERED_M_N(dev_priv) (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
#define HAS_DOUBLE_BUFFERED_M_N(i915) (DISPLAY_VER(i915) >= 9 || IS_BROADWELL(i915))
#define HAS_CDCLK_CRAWL(dev_priv) (INTEL_INFO(dev_priv)->display.has_cdclk_crawl)
#define HAS_CDCLK_SQUASH(dev_priv) (INTEL_INFO(dev_priv)->display.has_cdclk_squash)
#define HAS_DDI(dev_priv) (INTEL_INFO(dev_priv)->display.has_ddi)
#define HAS_FPGA_DBG_UNCLAIMED(dev_priv) (INTEL_INFO(dev_priv)->display.has_fpga_dbg)
#define HAS_PSR(dev_priv) (INTEL_INFO(dev_priv)->display.has_psr)
#define HAS_PSR_HW_TRACKING(dev_priv) \
(INTEL_INFO(dev_priv)->display.has_psr_hw_tracking)
#define HAS_PSR2_SEL_FETCH(dev_priv) (DISPLAY_VER(dev_priv) >= 12)
#define HAS_TRANSCODER(dev_priv, trans) ((RUNTIME_INFO(dev_priv)->cpu_transcoder_mask & BIT(trans)) != 0)
#define HAS_CDCLK_CRAWL(i915) (INTEL_INFO(i915)->display.has_cdclk_crawl)
#define HAS_CDCLK_SQUASH(i915) (INTEL_INFO(i915)->display.has_cdclk_squash)
#define HAS_DDI(i915) (INTEL_INFO(i915)->display.has_ddi)
#define HAS_FPGA_DBG_UNCLAIMED(i915) (INTEL_INFO(i915)->display.has_fpga_dbg)
#define HAS_PSR(i915) (INTEL_INFO(i915)->display.has_psr)
#define HAS_PSR_HW_TRACKING(i915) \
(INTEL_INFO(i915)->display.has_psr_hw_tracking)
#define HAS_PSR2_SEL_FETCH(i915) (DISPLAY_VER(i915) >= 12)
#define HAS_TRANSCODER(i915, trans) ((RUNTIME_INFO(i915)->cpu_transcoder_mask & BIT(trans)) != 0)
#define HAS_RC6(dev_priv) (INTEL_INFO(dev_priv)->has_rc6)
#define HAS_RC6p(dev_priv) (INTEL_INFO(dev_priv)->has_rc6p)
#define HAS_RC6pp(dev_priv) (false) /* HW was never validated */
#define HAS_RC6(i915) (INTEL_INFO(i915)->has_rc6)
#define HAS_RC6p(i915) (INTEL_INFO(i915)->has_rc6p)
#define HAS_RC6pp(i915) (false) /* HW was never validated */
#define HAS_RPS(dev_priv) (INTEL_INFO(dev_priv)->has_rps)
#define HAS_RPS(i915) (INTEL_INFO(i915)->has_rps)
#define HAS_DMC(dev_priv) (RUNTIME_INFO(dev_priv)->has_dmc)
#define HAS_DSB(dev_priv) (INTEL_INFO(dev_priv)->display.has_dsb)
#define HAS_DMC(i915) (RUNTIME_INFO(i915)->has_dmc)
#define HAS_DSB(i915) (INTEL_INFO(i915)->display.has_dsb)
#define HAS_DSC(__i915) (RUNTIME_INFO(__i915)->has_dsc)
#define HAS_HW_SAGV_WM(i915) (DISPLAY_VER(i915) >= 13 && !IS_DGFX(i915))
#define HAS_HECI_PXP(dev_priv) \
(INTEL_INFO(dev_priv)->has_heci_pxp)
#define HAS_HECI_PXP(i915) \
(INTEL_INFO(i915)->has_heci_pxp)
#define HAS_HECI_GSCFI(dev_priv) \
(INTEL_INFO(dev_priv)->has_heci_gscfi)
#define HAS_HECI_GSCFI(i915) \
(INTEL_INFO(i915)->has_heci_gscfi)
#define HAS_HECI_GSC(dev_priv) (HAS_HECI_PXP(dev_priv) || HAS_HECI_GSCFI(dev_priv))
#define HAS_HECI_GSC(i915) (HAS_HECI_PXP(i915) || HAS_HECI_GSCFI(i915))
#define HAS_MSO(i915) (DISPLAY_VER(i915) >= 12)
#define HAS_RUNTIME_PM(dev_priv) (INTEL_INFO(dev_priv)->has_runtime_pm)
#define HAS_64BIT_RELOC(dev_priv) (INTEL_INFO(dev_priv)->has_64bit_reloc)
#define HAS_RUNTIME_PM(i915) (INTEL_INFO(i915)->has_runtime_pm)
#define HAS_64BIT_RELOC(i915) (INTEL_INFO(i915)->has_64bit_reloc)
#define HAS_OA_BPC_REPORTING(dev_priv) \
(INTEL_INFO(dev_priv)->has_oa_bpc_reporting)
#define HAS_OA_SLICE_CONTRIB_LIMITS(dev_priv) \
(INTEL_INFO(dev_priv)->has_oa_slice_contrib_limits)
#define HAS_OAM(dev_priv) \
(INTEL_INFO(dev_priv)->has_oam)
#define HAS_OA_BPC_REPORTING(i915) \
(INTEL_INFO(i915)->has_oa_bpc_reporting)
#define HAS_OA_SLICE_CONTRIB_LIMITS(i915) \
(INTEL_INFO(i915)->has_oa_slice_contrib_limits)
#define HAS_OAM(i915) \
(INTEL_INFO(i915)->has_oam)
/*
* Set this flag, when platform requires 64K GTT page sizes or larger for
* device local memory access.
*/
#define HAS_64K_PAGES(dev_priv) (INTEL_INFO(dev_priv)->has_64k_pages)
#define HAS_64K_PAGES(i915) (INTEL_INFO(i915)->has_64k_pages)
#define HAS_IPC(dev_priv) (INTEL_INFO(dev_priv)->display.has_ipc)
#define HAS_SAGV(dev_priv) (DISPLAY_VER(dev_priv) >= 9 && !IS_LP(dev_priv))
#define HAS_IPC(i915) (INTEL_INFO(i915)->display.has_ipc)
#define HAS_SAGV(i915) (DISPLAY_VER(i915) >= 9 && !IS_LP(i915))
#define HAS_REGION(i915, i) (RUNTIME_INFO(i915)->memory_regions & (i))
#define HAS_LMEM(i915) HAS_REGION(i915, REGION_LMEM)
#define HAS_EXTRA_GT_LIST(dev_priv) (INTEL_INFO(dev_priv)->extra_gt_list)
#define HAS_EXTRA_GT_LIST(i915) (INTEL_INFO(i915)->extra_gt_list)
/*
* Platform has the dedicated compression control state for each lmem surfaces
* stored in lmem to support the 3D and media compression formats.
*/
#define HAS_FLAT_CCS(dev_priv) (INTEL_INFO(dev_priv)->has_flat_ccs)
#define HAS_FLAT_CCS(i915) (INTEL_INFO(i915)->has_flat_ccs)
#define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc)
#define HAS_GT_UC(i915) (INTEL_INFO(i915)->has_gt_uc)
#define HAS_POOLED_EU(dev_priv) (RUNTIME_INFO(dev_priv)->has_pooled_eu)
#define HAS_POOLED_EU(i915) (RUNTIME_INFO(i915)->has_pooled_eu)
#define HAS_GLOBAL_MOCS_REGISTERS(dev_priv) (INTEL_INFO(dev_priv)->has_global_mocs)
#define HAS_GLOBAL_MOCS_REGISTERS(i915) (INTEL_INFO(i915)->has_global_mocs)
#define HAS_GMCH(dev_priv) (INTEL_INFO(dev_priv)->display.has_gmch)
#define HAS_GMCH(i915) (INTEL_INFO(i915)->display.has_gmch)
#define HAS_GMD_ID(i915) (INTEL_INFO(i915)->has_gmd_id)
#define HAS_LSPCON(dev_priv) (IS_DISPLAY_VER(dev_priv, 9, 10))
#define HAS_LSPCON(i915) (IS_DISPLAY_VER(i915, 9, 10))
#define HAS_L3_CCS_READ(i915) (INTEL_INFO(i915)->has_l3_ccs_read)
/* DPF == dynamic parity feature */
#define HAS_L3_DPF(dev_priv) (INTEL_INFO(dev_priv)->has_l3_dpf)
#define NUM_L3_SLICES(dev_priv) (IS_HSW_GT3(dev_priv) ? \
2 : HAS_L3_DPF(dev_priv))
#define HAS_L3_DPF(i915) (INTEL_INFO(i915)->has_l3_dpf)
#define NUM_L3_SLICES(i915) (IS_HSW_GT3(i915) ? \
2 : HAS_L3_DPF(i915))
#define INTEL_NUM_PIPES(dev_priv) (hweight8(RUNTIME_INFO(dev_priv)->pipe_mask))
#define INTEL_NUM_PIPES(i915) (hweight8(RUNTIME_INFO(i915)->pipe_mask))
#define HAS_DISPLAY(dev_priv) (RUNTIME_INFO(dev_priv)->pipe_mask != 0)
#define HAS_DISPLAY(i915) (RUNTIME_INFO(i915)->pipe_mask != 0)
#define HAS_VRR(i915) (DISPLAY_VER(i915) >= 11)
#define HAS_ASYNC_FLIPS(i915) (DISPLAY_VER(i915) >= 5)
/* Only valid when HAS_DISPLAY() is true */
#define INTEL_DISPLAY_ENABLED(dev_priv) \
(drm_WARN_ON(&(dev_priv)->drm, !HAS_DISPLAY(dev_priv)), \
!(dev_priv)->params.disable_display && \
!intel_opregion_headless_sku(dev_priv))
#define INTEL_DISPLAY_ENABLED(i915) \
(drm_WARN_ON(&(i915)->drm, !HAS_DISPLAY(i915)), \
!(i915)->params.disable_display && \
!intel_opregion_headless_sku(i915))
#define HAS_GUC_DEPRIVILEGE(dev_priv) \
(INTEL_INFO(dev_priv)->has_guc_deprivilege)
#define HAS_GUC_DEPRIVILEGE(i915) \
(INTEL_INFO(i915)->has_guc_deprivilege)
#define HAS_D12_PLANE_MINIMIZATION(dev_priv) (IS_ROCKETLAKE(dev_priv) || \
IS_ALDERLAKE_S(dev_priv))
#define HAS_D12_PLANE_MINIMIZATION(i915) (IS_ROCKETLAKE(i915) || \
IS_ALDERLAKE_S(i915))
#define HAS_MBUS_JOINING(i915) (IS_ALDERLAKE_P(i915) || DISPLAY_VER(i915) >= 14)
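
With for_each_engine() above now taking a struct intel_gt rather than the whole device, engine iteration composes naturally with the per-tile iterator used elsewhere in this series; an illustrative (not driver-verbatim) loop:

struct intel_gt *gt;
struct intel_engine_cs *engine;
enum intel_engine_id id;
unsigned int i;

for_each_gt(gt, i915, i)
	for_each_engine(engine, gt, id)
		drm_dbg(&i915->drm, "GT%u: %s\n", i, engine->name);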


@ -420,8 +420,11 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
page_length = remain < page_length ? remain : page_length;
if (drm_mm_node_allocated(&node)) {
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
i915_gem_object_get_dma_address(obj,
offset >> PAGE_SHIFT),
node.start,
i915_gem_get_pat_index(i915,
I915_CACHE_NONE), 0);
} else {
page_base += offset & PAGE_MASK;
}
@ -598,8 +601,11 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
/* flush the write before we modify the GGTT */
intel_gt_flush_ggtt_writes(ggtt->vm.gt);
ggtt->vm.insert_page(&ggtt->vm,
i915_gem_object_get_dma_address(obj, offset >> PAGE_SHIFT),
node.start, I915_CACHE_NONE, 0);
i915_gem_object_get_dma_address(obj,
offset >> PAGE_SHIFT),
node.start,
i915_gem_get_pat_index(i915,
I915_CACHE_NONE), 0);
wmb(); /* flush modifications to the GGTT (insert_page) */
} else {
page_base += offset & PAGE_MASK;
@ -1142,6 +1148,19 @@ int i915_gem_init(struct drm_i915_private *dev_priv)
unsigned int i;
int ret;
/*
* In the process of replacing cache_level with pat_index, a tricky
* dependency is created on the definition of the enum i915_cache_level.
* In case this enum is changed, PTE encode would be broken.
* Add a WARNING here, and remove it when we completely quit using this
* enum.
*/
BUILD_BUG_ON(I915_CACHE_NONE != 0 ||
I915_CACHE_LLC != 1 ||
I915_CACHE_L3_LLC != 2 ||
I915_CACHE_WT != 3 ||
I915_MAX_CACHE_LEVEL != 4);
/* We need to fallback to 4K pages if host doesn't support huge gtt. */
if (intel_vgpu_active(dev_priv) && !intel_vgpu_has_huge_gtt(dev_priv))
RUNTIME_INFO(dev_priv)->page_sizes = I915_GTT_PAGE_SIZE_4K;
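
To make the dependency called out in the comment above concrete: if a PTE encode helper indexes a table by the raw enum value, reordering the enum silently changes which cacheability bits get programmed. A made-up illustration (names and bit values are not the driver's):

static const u64 fake_pte_cache_bits[] = {
	[0] = 0x0,	/* expected to be I915_CACHE_NONE */
	[1] = 0x2,	/* expected to be I915_CACHE_LLC */
	[2] = 0x6,	/* expected to be I915_CACHE_L3_LLC */
	[3] = 0x4,	/* expected to be I915_CACHE_WT */
};

static u64 fake_pte_encode(u64 addr, unsigned int cache_level)
{
	return addr | fake_pte_cache_bits[cache_level];
}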


@ -5,6 +5,8 @@
#include "gem/i915_gem_mman.h"
#include "gt/intel_engine_user.h"
#include "pxp/intel_pxp.h"
#include "i915_cmd_parser.h"
#include "i915_drv.h"
#include "i915_getparam.h"
@ -102,6 +104,11 @@ int i915_getparam_ioctl(struct drm_device *dev, void *data,
if (value < 0)
return value;
break;
case I915_PARAM_PXP_STATUS:
value = intel_pxp_get_readiness_status(i915->pxp);
if (value < 0)
return value;
break;
case I915_PARAM_MMAP_GTT_VERSION:
/* Though we've started our numbering from 1, and so class all
* earlier versions as 0, in effect their value is undefined as
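
For the new I915_PARAM_PXP_STATUS case above, a hedged sketch of how userspace could query it through the existing GETPARAM ioctl (include paths come from libdrm; the exact encoding of the readiness/load status is defined by the UAPI header, not shown here):

#include <errno.h>
#include <xf86drm.h>
#include <i915_drm.h>

static int query_pxp_status(int fd)
{
	int value = 0;
	struct drm_i915_getparam gp = {
		.param = I915_PARAM_PXP_STATUS,
		.value = &value,
	};

	if (drmIoctl(fd, DRM_IOCTL_I915_GETPARAM, &gp))
		return -errno;	/* e.g. PXP not supported on this device */
	return value;		/* readiness/load status as defined by the UAPI */
}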


@ -808,10 +808,15 @@ static void err_print_gt_engines(struct drm_i915_error_state_buf *m,
for (ee = gt->engine; ee; ee = ee->next) {
const struct i915_vma_coredump *vma;
if (ee->guc_capture_node)
intel_guc_capture_print_engine_node(m, ee);
else
if (gt->uc && gt->uc->guc.is_guc_capture) {
if (ee->guc_capture_node)
intel_guc_capture_print_engine_node(m, ee);
else
err_printf(m, " Missing GuC capture node for %s\n",
ee->engine->name);
} else {
error_print_engine(m, ee);
}
err_printf(m, " hung: %u\n", ee->hung);
err_printf(m, " engine reset count: %u\n", ee->reset_count);
@ -1117,10 +1122,14 @@ i915_vma_coredump_create(const struct intel_gt *gt,
mutex_lock(&ggtt->error_mutex);
if (ggtt->vm.raw_insert_page)
ggtt->vm.raw_insert_page(&ggtt->vm, dma, slot,
I915_CACHE_NONE, 0);
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
0);
else
ggtt->vm.insert_page(&ggtt->vm, dma, slot,
I915_CACHE_NONE, 0);
i915_gem_get_pat_index(gt->i915,
I915_CACHE_NONE),
0);
mb();
s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE);
@ -2162,7 +2171,7 @@ void i915_error_state_store(struct i915_gpu_coredump *error)
* i915_capture_error_state - capture an error record for later analysis
* @gt: intel_gt which originated the hang
* @engine_mask: hung engines
*
* @dump_flags: dump flags
*
* Should be called when an error is detected (either a hang or an error
* interrupt) to capture error state from the time of the error. Fills
@ -2219,3 +2228,135 @@ void i915_disable_error_state(struct drm_i915_private *i915, int err)
i915->gpu_error.first_error = ERR_PTR(err);
spin_unlock_irq(&i915->gpu_error.lock);
}
#if IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
void intel_klog_error_capture(struct intel_gt *gt,
intel_engine_mask_t engine_mask)
{
static int g_count;
struct drm_i915_private *i915 = gt->i915;
struct i915_gpu_coredump *error;
intel_wakeref_t wakeref;
size_t buf_size = PAGE_SIZE * 128;
size_t pos_err;
char *buf, *ptr, *next;
int l_count = g_count++;
int line = 0;
/* Can't allocate memory during a reset */
if (test_bit(I915_RESET_BACKOFF, &gt->reset.flags)) {
drm_err(&gt->i915->drm, "[Capture/%d.%d] Inside GT reset, skipping error capture :(\n",
l_count, line++);
return;
}
error = READ_ONCE(i915->gpu_error.first_error);
if (error) {
drm_err(&i915->drm, "[Capture/%d.%d] Clearing existing error capture first...\n",
l_count, line++);
i915_reset_error_state(i915);
}
with_intel_runtime_pm(&i915->runtime_pm, wakeref)
error = i915_gpu_coredump(gt, engine_mask, CORE_DUMP_FLAG_NONE);
if (IS_ERR(error)) {
drm_err(&i915->drm, "[Capture/%d.%d] Failed to capture error capture: %ld!\n",
l_count, line++, PTR_ERR(error));
return;
}
buf = kvmalloc(buf_size, GFP_KERNEL);
if (!buf) {
drm_err(&i915->drm, "[Capture/%d.%d] Failed to allocate buffer for error capture!\n",
l_count, line++);
i915_gpu_coredump_put(error);
return;
}
drm_info(&i915->drm, "[Capture/%d.%d] Dumping i915 error capture for %ps...\n",
l_count, line++, __builtin_return_address(0));
/* Largest string length safe to print via dmesg */
# define MAX_CHUNK 800
pos_err = 0;
while (1) {
ssize_t got = i915_gpu_coredump_copy_to_buffer(error, buf, pos_err, buf_size - 1);
if (got <= 0)
break;
buf[got] = 0;
pos_err += got;
ptr = buf;
while (got > 0) {
size_t count;
char tag[2];
next = strnchr(ptr, got, '\n');
if (next) {
count = next - ptr;
*next = 0;
tag[0] = '>';
tag[1] = '<';
} else {
count = got;
tag[0] = '}';
tag[1] = '{';
}
if (count > MAX_CHUNK) {
size_t pos;
char *ptr2 = ptr;
for (pos = MAX_CHUNK; pos < count; pos += MAX_CHUNK) {
char chr = ptr[pos];
ptr[pos] = 0;
drm_info(&i915->drm, "[Capture/%d.%d] }%s{\n",
l_count, line++, ptr2);
ptr[pos] = chr;
ptr2 = ptr + pos;
/*
* If spewing large amounts of data via a serial console,
* this can be a very slow process. So be friendly and try
* not to cause 'softlockup on CPU' problems.
*/
cond_resched();
}
if (ptr2 < (ptr + count))
drm_info(&i915->drm, "[Capture/%d.%d] %c%s%c\n",
l_count, line++, tag[0], ptr2, tag[1]);
else if (tag[0] == '>')
drm_info(&i915->drm, "[Capture/%d.%d] ><\n",
l_count, line++);
} else {
drm_info(&i915->drm, "[Capture/%d.%d] %c%s%c\n",
l_count, line++, tag[0], ptr, tag[1]);
}
ptr = next;
got -= count;
if (next) {
ptr++;
got--;
}
/* As above. */
cond_resched();
}
if (got)
drm_info(&i915->drm, "[Capture/%d.%d] Got %zd bytes remaining!\n",
l_count, line++, got);
}
kvfree(buf);
drm_info(&i915->drm, "[Capture/%d.%d] Dumped %zd bytes\n", l_count, line++, pos_err);
}
#endif
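
The dump loop above wraps complete lines in '>'/'<' and fragments of an over-long line in '}'/'{' so a log post-processor can stitch the capture back together from dmesg. A hedged example of triggering it from driver code (the real call sites are elsewhere in the series; ALL_ENGINES is the usual "capture everything" mask):

intel_klog_error_capture(gt, ALL_ENGINES);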


@ -258,6 +258,16 @@ static inline u32 i915_reset_engine_count(struct i915_gpu_error *error,
#define CORE_DUMP_FLAG_NONE 0x0
#define CORE_DUMP_FLAG_IS_GUC_CAPTURE BIT(0)
#if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR) && IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM)
void intel_klog_error_capture(struct intel_gt *gt,
intel_engine_mask_t engine_mask);
#else
static inline void intel_klog_error_capture(struct intel_gt *gt,
intel_engine_mask_t engine_mask)
{
}
#endif
#if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR)
__printf(2, 3)


@ -50,6 +50,8 @@ struct hwm_drvdata {
struct hwm_energy_info ei; /* Energy info for energy1_input */
char name[12];
int gt_n;
bool reset_in_progress;
wait_queue_head_t waitq;
};
struct i915_hwmon {
@ -396,31 +398,56 @@ hwm_power_max_write(struct hwm_drvdata *ddat, long val)
{
struct i915_hwmon *hwmon = ddat->hwmon;
intel_wakeref_t wakeref;
DEFINE_WAIT(wait);
int ret = 0;
u32 nval;
/* Block waiting for GuC reset to complete when needed */
for (;;) {
mutex_lock(&hwmon->hwmon_lock);
prepare_to_wait(&ddat->waitq, &wait, TASK_INTERRUPTIBLE);
if (!hwmon->ddat.reset_in_progress)
break;
if (signal_pending(current)) {
ret = -EINTR;
break;
}
mutex_unlock(&hwmon->hwmon_lock);
schedule();
}
finish_wait(&ddat->waitq, &wait);
if (ret)
goto unlock;
wakeref = intel_runtime_pm_get(ddat->uncore->rpm);
/* Disable PL1 limit and verify, because the limit cannot be disabled on all platforms */
if (val == PL1_DISABLE) {
mutex_lock(&hwmon->hwmon_lock);
with_intel_runtime_pm(ddat->uncore->rpm, wakeref) {
intel_uncore_rmw(ddat->uncore, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN, 0);
nval = intel_uncore_read(ddat->uncore, hwmon->rg.pkg_rapl_limit);
}
mutex_unlock(&hwmon->hwmon_lock);
intel_uncore_rmw(ddat->uncore, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN, 0);
nval = intel_uncore_read(ddat->uncore, hwmon->rg.pkg_rapl_limit);
if (nval & PKG_PWR_LIM_1_EN)
return -ENODEV;
return 0;
ret = -ENODEV;
goto exit;
}
/* Computation in 64-bits to avoid overflow. Round to nearest. */
nval = DIV_ROUND_CLOSEST_ULL((u64)val << hwmon->scl_shift_power, SF_POWER);
nval = PKG_PWR_LIM_1_EN | REG_FIELD_PREP(PKG_PWR_LIM_1, nval);
hwm_locked_with_pm_intel_uncore_rmw(ddat, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN | PKG_PWR_LIM_1,
nval);
return 0;
intel_uncore_rmw(ddat->uncore, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN | PKG_PWR_LIM_1, nval);
exit:
intel_runtime_pm_put(ddat->uncore->rpm, wakeref);
unlock:
mutex_unlock(&hwmon->hwmon_lock);
return ret;
}
static int
@ -470,6 +497,41 @@ hwm_power_write(struct hwm_drvdata *ddat, u32 attr, int chan, long val)
}
}
void i915_hwmon_power_max_disable(struct drm_i915_private *i915, bool *old)
{
struct i915_hwmon *hwmon = i915->hwmon;
u32 r;
if (!hwmon || !i915_mmio_reg_valid(hwmon->rg.pkg_rapl_limit))
return;
mutex_lock(&hwmon->hwmon_lock);
hwmon->ddat.reset_in_progress = true;
r = intel_uncore_rmw(hwmon->ddat.uncore, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN, 0);
*old = !!(r & PKG_PWR_LIM_1_EN);
mutex_unlock(&hwmon->hwmon_lock);
}
void i915_hwmon_power_max_restore(struct drm_i915_private *i915, bool old)
{
struct i915_hwmon *hwmon = i915->hwmon;
if (!hwmon || !i915_mmio_reg_valid(hwmon->rg.pkg_rapl_limit))
return;
mutex_lock(&hwmon->hwmon_lock);
intel_uncore_rmw(hwmon->ddat.uncore, hwmon->rg.pkg_rapl_limit,
PKG_PWR_LIM_1_EN, old ? PKG_PWR_LIM_1_EN : 0);
hwmon->ddat.reset_in_progress = false;
wake_up_all(&hwmon->ddat.waitq);
mutex_unlock(&hwmon->hwmon_lock);
}
static umode_t
hwm_energy_is_visible(const struct hwm_drvdata *ddat, u32 attr)
{
@ -742,6 +804,7 @@ void i915_hwmon_register(struct drm_i915_private *i915)
ddat->uncore = &i915->uncore;
snprintf(ddat->name, sizeof(ddat->name), "i915");
ddat->gt_n = -1;
init_waitqueue_head(&ddat->waitq);
for_each_gt(gt, i915, i) {
ddat_gt = hwmon->ddat_gt + i;


@ -7,14 +7,21 @@
#ifndef __I915_HWMON_H__
#define __I915_HWMON_H__
#include <linux/types.h>
struct drm_i915_private;
struct intel_gt;
#if IS_REACHABLE(CONFIG_HWMON)
void i915_hwmon_register(struct drm_i915_private *i915);
void i915_hwmon_unregister(struct drm_i915_private *i915);
void i915_hwmon_power_max_disable(struct drm_i915_private *i915, bool *old);
void i915_hwmon_power_max_restore(struct drm_i915_private *i915, bool old);
#else
static inline void i915_hwmon_register(struct drm_i915_private *i915) { };
static inline void i915_hwmon_unregister(struct drm_i915_private *i915) { };
static inline void i915_hwmon_power_max_disable(struct drm_i915_private *i915, bool *old) { };
static inline void i915_hwmon_power_max_restore(struct drm_i915_private *i915, bool old) { };
#endif
#endif /* __I915_HWMON_H__ */
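
The disable/restore pair declared above is meant to bracket an operation that must not be clipped by the PL1 limit, such as a GuC reset or firmware load; a hedged sketch of the calling pattern:

bool pl1_was_enabled;

i915_hwmon_power_max_disable(i915, &pl1_was_enabled);

/* ... perform the reset / firmware load ... */

i915_hwmon_power_max_restore(i915, pl1_was_enabled);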


@ -2762,12 +2762,15 @@ static void gen11_irq_reset(struct drm_i915_private *dev_priv)
static void dg1_irq_reset(struct drm_i915_private *dev_priv)
{
struct intel_gt *gt = to_gt(dev_priv);
struct intel_uncore *uncore = gt->uncore;
struct intel_uncore *uncore = &dev_priv->uncore;
struct intel_gt *gt;
unsigned int i;
dg1_master_intr_disable(dev_priv->uncore.regs);
gen11_gt_irq_reset(gt);
for_each_gt(gt, dev_priv, i)
gen11_gt_irq_reset(gt);
gen11_display_irq_reset(dev_priv);
GEN3_IRQ_RESET(uncore, GEN11_GU_MISC_);
@ -3425,11 +3428,13 @@ static void gen11_irq_postinstall(struct drm_i915_private *dev_priv)
static void dg1_irq_postinstall(struct drm_i915_private *dev_priv)
{
struct intel_gt *gt = to_gt(dev_priv);
struct intel_uncore *uncore = gt->uncore;
struct intel_uncore *uncore = &dev_priv->uncore;
u32 gu_misc_masked = GEN11_GU_MISC_GSE;
struct intel_gt *gt;
unsigned int i;
gen11_gt_irq_postinstall(gt);
for_each_gt(gt, dev_priv, i)
gen11_gt_irq_postinstall(gt);
GEN3_IRQ_INIT(uncore, GEN11_GU_MISC_, ~gu_misc_masked, gu_misc_masked);


@ -29,6 +29,7 @@
#include "display/intel_display.h"
#include "gt/intel_gt_regs.h"
#include "gt/intel_sa_media.h"
#include "gem/i915_gem_object_types.h"
#include "i915_driver.h"
#include "i915_drv.h"
@ -163,6 +164,38 @@
.gamma_lut_tests = DRM_COLOR_LUT_NON_DECREASING, \
}
#define LEGACY_CACHELEVEL \
.cachelevel_to_pat = { \
[I915_CACHE_NONE] = 0, \
[I915_CACHE_LLC] = 1, \
[I915_CACHE_L3_LLC] = 2, \
[I915_CACHE_WT] = 3, \
}
#define TGL_CACHELEVEL \
.cachelevel_to_pat = { \
[I915_CACHE_NONE] = 3, \
[I915_CACHE_LLC] = 0, \
[I915_CACHE_L3_LLC] = 0, \
[I915_CACHE_WT] = 2, \
}
#define PVC_CACHELEVEL \
.cachelevel_to_pat = { \
[I915_CACHE_NONE] = 0, \
[I915_CACHE_LLC] = 3, \
[I915_CACHE_L3_LLC] = 3, \
[I915_CACHE_WT] = 2, \
}
#define MTL_CACHELEVEL \
.cachelevel_to_pat = { \
[I915_CACHE_NONE] = 2, \
[I915_CACHE_LLC] = 3, \
[I915_CACHE_L3_LLC] = 3, \
[I915_CACHE_WT] = 1, \
}
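
Reading these tables together with the debugfs decode earlier in the series: the same legacy cache level lands on a different PAT entry per platform, which is exactly what i915_gem_get_pat_index() resolves at the converted call sites. A hedged usage sketch, with the return values taken straight from the tables above:

unsigned int uc = i915_gem_get_pat_index(i915, I915_CACHE_NONE);	/* 0 legacy, 3 TGL, 0 PVC, 2 MTL */
unsigned int wt = i915_gem_get_pat_index(i915, I915_CACHE_WT);	/* 3 legacy, 2 TGL, 2 PVC, 1 MTL */
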
/* Keep in gen based order, and chronological order within a gen */
#define GEN_DEFAULT_PAGE_SIZES \
@ -188,11 +221,13 @@
.has_snoop = true, \
.has_coherent_ggtt = false, \
.dma_mask_size = 32, \
.max_pat_index = 3, \
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I9XX_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
#define I845_FEATURES \
GEN(2), \
@ -209,11 +244,13 @@
.has_snoop = true, \
.has_coherent_ggtt = false, \
.dma_mask_size = 32, \
.max_pat_index = 3, \
I845_PIPE_OFFSETS, \
I845_CURSOR_OFFSETS, \
I845_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
static const struct intel_device_info i830_info = {
I830_FEATURES,
@ -248,11 +285,13 @@ static const struct intel_device_info i865g_info = {
.has_snoop = true, \
.has_coherent_ggtt = true, \
.dma_mask_size = 32, \
.max_pat_index = 3, \
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I9XX_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
static const struct intel_device_info i915g_info = {
GEN3_FEATURES,
@ -340,11 +379,13 @@ static const struct intel_device_info pnv_m_info = {
.has_snoop = true, \
.has_coherent_ggtt = true, \
.dma_mask_size = 36, \
.max_pat_index = 3, \
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I9XX_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
static const struct intel_device_info i965g_info = {
GEN4_FEATURES,
@ -394,11 +435,13 @@ static const struct intel_device_info gm45_info = {
/* ilk does support rc6, but we do not implement [power] contexts */ \
.has_rc6 = 0, \
.dma_mask_size = 36, \
.max_pat_index = 3, \
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
ILK_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
static const struct intel_device_info ilk_d_info = {
GEN5_FEATURES,
@ -428,13 +471,15 @@ static const struct intel_device_info ilk_m_info = {
.has_rc6p = 0, \
.has_rps = true, \
.dma_mask_size = 40, \
.max_pat_index = 3, \
.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
.__runtime.ppgtt_size = 31, \
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
ILK_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
#define SNB_D_PLATFORM \
GEN6_FEATURES, \
@ -481,13 +526,15 @@ static const struct intel_device_info snb_m_gt2_info = {
.has_reset_engine = true, \
.has_rps = true, \
.dma_mask_size = 40, \
.max_pat_index = 3, \
.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING, \
.__runtime.ppgtt_size = 31, \
IVB_PIPE_OFFSETS, \
IVB_CURSOR_OFFSETS, \
IVB_COLORS, \
GEN_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
#define IVB_D_PLATFORM \
GEN7_FEATURES, \
@ -541,6 +588,7 @@ static const struct intel_device_info vlv_info = {
.display.has_gmch = 1,
.display.has_hotplug = 1,
.dma_mask_size = 40,
.max_pat_index = 3,
.__runtime.ppgtt_type = INTEL_PPGTT_ALIASING,
.__runtime.ppgtt_size = 31,
.has_snoop = true,
@ -552,6 +600,7 @@ static const struct intel_device_info vlv_info = {
I9XX_COLORS,
GEN_DEFAULT_PAGE_SIZES,
GEN_DEFAULT_REGIONS,
LEGACY_CACHELEVEL,
};
#define G75_FEATURES \
@ -639,6 +688,7 @@ static const struct intel_device_info chv_info = {
.has_logical_ring_contexts = 1,
.display.has_gmch = 1,
.dma_mask_size = 39,
.max_pat_index = 3,
.__runtime.ppgtt_type = INTEL_PPGTT_FULL,
.__runtime.ppgtt_size = 32,
.has_reset_engine = 1,
@ -650,6 +700,7 @@ static const struct intel_device_info chv_info = {
CHV_COLORS,
GEN_DEFAULT_PAGE_SIZES,
GEN_DEFAULT_REGIONS,
LEGACY_CACHELEVEL,
};
#define GEN9_DEFAULT_PAGE_SIZES \
@ -731,11 +782,13 @@ static const struct intel_device_info skl_gt4_info = {
.has_snoop = true, \
.has_coherent_ggtt = false, \
.display.has_ipc = 1, \
.max_pat_index = 3, \
HSW_PIPE_OFFSETS, \
IVB_CURSOR_OFFSETS, \
IVB_COLORS, \
GEN9_DEFAULT_PAGE_SIZES, \
GEN_DEFAULT_REGIONS
GEN_DEFAULT_REGIONS, \
LEGACY_CACHELEVEL
static const struct intel_device_info bxt_info = {
GEN9_LP_FEATURES,
@ -889,9 +942,11 @@ static const struct intel_device_info jsl_info = {
[TRANSCODER_DSI_1] = TRANSCODER_DSI1_OFFSET, \
}, \
TGL_CURSOR_OFFSETS, \
TGL_CACHELEVEL, \
.has_global_mocs = 1, \
.has_pxp = 1, \
.display.has_dsb = 1
.display.has_dsb = 1, \
.max_pat_index = 3
static const struct intel_device_info tgl_info = {
GEN12_FEATURES,
@ -1013,6 +1068,7 @@ static const struct intel_device_info adl_p_info = {
.__runtime.graphics.ip.ver = 12, \
.__runtime.graphics.ip.rel = 50, \
XE_HP_PAGE_SIZES, \
TGL_CACHELEVEL, \
.dma_mask_size = 46, \
.has_3d_pipeline = 1, \
.has_64bit_reloc = 1, \
@ -1031,6 +1087,7 @@ static const struct intel_device_info adl_p_info = {
.has_reset_engine = 1, \
.has_rps = 1, \
.has_runtime_pm = 1, \
.max_pat_index = 3, \
.__runtime.ppgtt_size = 48, \
.__runtime.ppgtt_type = INTEL_PPGTT_FULL
@ -1107,11 +1164,13 @@ static const struct intel_device_info pvc_info = {
PLATFORM(INTEL_PONTEVECCHIO),
NO_DISPLAY,
.has_flat_ccs = 0,
.max_pat_index = 7,
.__runtime.platform_engine_mask =
BIT(BCS0) |
BIT(VCS0) |
BIT(CCS0) | BIT(CCS1) | BIT(CCS2) | BIT(CCS3),
.require_force_probe = 1,
PVC_CACHELEVEL,
};
#define XE_LPDP_FEATURES \
@ -1148,11 +1207,15 @@ static const struct intel_device_info mtl_info = {
.has_flat_ccs = 0,
.has_gmd_id = 1,
.has_guc_deprivilege = 1,
.has_llc = 0,
.has_mslice_steering = 0,
.has_snoop = 1,
.max_pat_index = 4,
.has_pxp = 1,
.__runtime.memory_regions = REGION_SMEM | REGION_STOLEN_LMEM,
.__runtime.platform_engine_mask = BIT(RCS0) | BIT(BCS0) | BIT(CCS0),
.require_force_probe = 1,
MTL_CACHELEVEL,
};
#undef PLATFORM
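
The device-info hunks above add a per-platform .max_pat_index alongside a *_CACHELEVEL entry (LEGACY_CACHELEVEL, TGL_CACHELEVEL, PVC_CACHELEVEL, MTL_CACHELEVEL). A minimal, self-contained sketch of the idea, with type, field and macro names assumed for illustration rather than taken from this diff: each platform carries a small cache-level-to-PAT-index table bounded by its max_pat_index.

/* Illustrative sketch only; names and values are assumptions, not copied from the diff. */
enum example_cache_level {
	EX_CACHE_NONE,
	EX_CACHE_LLC,
	EX_CACHE_L3_LLC,
	EX_CACHE_WT,
	EX_NUM_CACHE_LEVELS
};

struct example_device_info {
	unsigned int max_pat_index;
	unsigned int cachelevel_to_pat[EX_NUM_CACHE_LEVELS];
};

/* One shared table per generation, in the spirit of LEGACY_CACHELEVEL above. */
#define EXAMPLE_LEGACY_CACHELEVEL \
	.cachelevel_to_pat = { [EX_CACHE_NONE] = 0, [EX_CACHE_LLC] = 1, \
			       [EX_CACHE_L3_LLC] = 2, [EX_CACHE_WT] = 3 }

static const struct example_device_info example_i915g_like_info = {
	.max_pat_index = 3,
	EXAMPLE_LEGACY_CACHELEVEL,
};

int main(void)
{
	/* Every mapped PAT index stays within max_pat_index. */
	return example_i915g_like_info.cachelevel_to_pat[EX_CACHE_WT] <=
	       example_i915g_like_info.max_pat_index ? 0 : 1;
}
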

View File

@ -5300,6 +5300,7 @@ void i915_perf_fini(struct drm_i915_private *i915)
/**
* i915_perf_ioctl_version - Version of the i915-perf subsystem
* @i915: The i915 device
*
* This version number is used by userspace to detect available features.
*/

View File

@ -134,10 +134,6 @@
#define GDT_CHICKEN_BITS _MMIO(0x9840)
#define GT_NOA_ENABLE 0x00000080
#define GEN12_SQCNT1 _MMIO(0x8718)
#define GEN12_SQCNT1_PMON_ENABLE REG_BIT(30)
#define GEN12_SQCNT1_OABPC REG_BIT(29)
/* Gen12 OAM unit */
#define GEN12_OAM_HEAD_POINTER_OFFSET (0x1a0)
#define GEN12_OAM_HEAD_POINTER_MASK 0xffffffc0

View File

@ -10,6 +10,7 @@
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_regs.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
#include "gt/intel_gt_regs.h"
#include "gt/intel_rc6.h"
@ -50,16 +51,26 @@ static u8 engine_event_instance(struct perf_event *event)
return (event->attr.config >> I915_PMU_SAMPLE_BITS) & 0xff;
}
static bool is_engine_config(u64 config)
static bool is_engine_config(const u64 config)
{
return config < __I915_PMU_OTHER(0);
}
static unsigned int config_gt_id(const u64 config)
{
return config >> __I915_PMU_GT_SHIFT;
}
static u64 config_counter(const u64 config)
{
return config & ~(~0ULL << __I915_PMU_GT_SHIFT);
}
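
config_gt_id() and config_counter() above split one u64 event config into a GT index in the topmost bits and the counter id below it. A self-contained sketch of that packing; the shift value here is an assumption for the example, not necessarily the one used by the uapi header.

#include <assert.h>
#include <stdint.h>

/* Assumed layout for illustration: GT id above EXAMPLE_GT_SHIFT, counter id below it. */
#define EXAMPLE_GT_SHIFT 56

static uint64_t example_pack(unsigned int gt, uint64_t counter)
{
	return counter | ((uint64_t)gt << EXAMPLE_GT_SHIFT);
}

static unsigned int example_gt_id(uint64_t config)
{
	return config >> EXAMPLE_GT_SHIFT;		/* same shape as config_gt_id() */
}

static uint64_t example_counter(uint64_t config)
{
	return config & ~(~0ULL << EXAMPLE_GT_SHIFT);	/* same shape as config_counter() */
}

int main(void)
{
	uint64_t cfg = example_pack(1, 42);

	assert(example_gt_id(cfg) == 1);
	assert(example_counter(cfg) == 42);
	return 0;
}
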
static unsigned int other_bit(const u64 config)
{
unsigned int val;
switch (config) {
switch (config_counter(config)) {
case I915_PMU_ACTUAL_FREQUENCY:
val = __I915_PMU_ACTUAL_FREQUENCY_ENABLED;
break;
@ -77,7 +88,9 @@ static unsigned int other_bit(const u64 config)
return -1;
}
return I915_ENGINE_SAMPLE_COUNT + val;
return I915_ENGINE_SAMPLE_COUNT +
config_gt_id(config) * __I915_PMU_TRACKED_EVENT_COUNT +
val;
}
static unsigned int config_bit(const u64 config)
@ -88,9 +101,20 @@ static unsigned int config_bit(const u64 config)
return other_bit(config);
}
static u64 config_mask(u64 config)
static u32 config_mask(const u64 config)
{
return BIT_ULL(config_bit(config));
unsigned int bit = config_bit(config);
if (__builtin_constant_p(config))
BUILD_BUG_ON(bit >
BITS_PER_TYPE(typeof_member(struct i915_pmu,
enable)) - 1);
else
WARN_ON_ONCE(bit >
BITS_PER_TYPE(typeof_member(struct i915_pmu,
enable)) - 1);
return BIT(config_bit(config));
}
static bool is_engine_event(struct perf_event *event)
@ -103,6 +127,18 @@ static unsigned int event_bit(struct perf_event *event)
return config_bit(event->attr.config);
}
static u32 frequency_enabled_mask(void)
{
unsigned int i;
u32 mask = 0;
for (i = 0; i < I915_PMU_MAX_GTS; i++)
mask |= config_mask(__I915_PMU_ACTUAL_FREQUENCY(i)) |
config_mask(__I915_PMU_REQUESTED_FREQUENCY(i));
return mask;
}
static bool pmu_needs_timer(struct i915_pmu *pmu, bool gpu_active)
{
struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
@ -119,9 +155,7 @@ static bool pmu_needs_timer(struct i915_pmu *pmu, bool gpu_active)
* Mask out all the ones which do not need the timer, or in
* other words keep all the ones that could need the timer.
*/
enable &= config_mask(I915_PMU_ACTUAL_FREQUENCY) |
config_mask(I915_PMU_REQUESTED_FREQUENCY) |
ENGINE_SAMPLE_MASK;
enable &= frequency_enabled_mask() | ENGINE_SAMPLE_MASK;
/*
* When the GPU is idle per-engine counters do not need to be
@ -163,9 +197,37 @@ static inline s64 ktime_since_raw(const ktime_t kt)
return ktime_to_ns(ktime_sub(ktime_get_raw(), kt));
}
static unsigned int
__sample_idx(struct i915_pmu *pmu, unsigned int gt_id, int sample)
{
unsigned int idx = gt_id * __I915_NUM_PMU_SAMPLERS + sample;
GEM_BUG_ON(idx >= ARRAY_SIZE(pmu->sample));
return idx;
}
static u64 read_sample(struct i915_pmu *pmu, unsigned int gt_id, int sample)
{
return pmu->sample[__sample_idx(pmu, gt_id, sample)].cur;
}
static void
store_sample(struct i915_pmu *pmu, unsigned int gt_id, int sample, u64 val)
{
pmu->sample[__sample_idx(pmu, gt_id, sample)].cur = val;
}
static void
add_sample_mult(struct i915_pmu *pmu, unsigned int gt_id, int sample, u32 val, u32 mul)
{
pmu->sample[__sample_idx(pmu, gt_id, sample)].cur += mul_u32_u32(val, mul);
}
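
With multiple GTs the PMU keeps one flat sample array and __sample_idx() above linearizes the (gt, sampler) pair into it. A self-contained illustration of the same layout; the two sizes are stand-in values, not the kernel's constants.

#include <assert.h>
#include <stdint.h>

#define EXAMPLE_MAX_GTS      2	/* stand-in for I915_PMU_MAX_GTS */
#define EXAMPLE_NUM_SAMPLERS 4	/* stand-in for __I915_NUM_PMU_SAMPLERS */

static uint64_t samples[EXAMPLE_MAX_GTS * EXAMPLE_NUM_SAMPLERS];

/* Same linearization as __sample_idx(): one contiguous block of samplers per GT. */
static unsigned int example_idx(unsigned int gt, unsigned int sampler)
{
	unsigned int idx = gt * EXAMPLE_NUM_SAMPLERS + sampler;

	assert(idx < sizeof(samples) / sizeof(samples[0]));
	return idx;
}

int main(void)
{
	samples[example_idx(1, 2)] = 123;		/* store_sample(pmu, 1, 2, 123) analogue */
	assert(samples[example_idx(1, 2)] == 123);	/* read_sample(pmu, 1, 2) analogue */
	return 0;
}
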
static u64 get_rc6(struct intel_gt *gt)
{
struct drm_i915_private *i915 = gt->i915;
const unsigned int gt_id = gt->info.id;
struct i915_pmu *pmu = &i915->pmu;
unsigned long flags;
bool awake = false;
@ -180,7 +242,7 @@ static u64 get_rc6(struct intel_gt *gt)
spin_lock_irqsave(&pmu->lock, flags);
if (awake) {
pmu->sample[__I915_SAMPLE_RC6].cur = val;
store_sample(pmu, gt_id, __I915_SAMPLE_RC6, val);
} else {
/*
* We think we are runtime suspended.
@ -189,14 +251,14 @@ static u64 get_rc6(struct intel_gt *gt)
* on top of the last known real value, as the approximated RC6
* counter value.
*/
val = ktime_since_raw(pmu->sleep_last);
val += pmu->sample[__I915_SAMPLE_RC6].cur;
val = ktime_since_raw(pmu->sleep_last[gt_id]);
val += read_sample(pmu, gt_id, __I915_SAMPLE_RC6);
}
if (val < pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur)
val = pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur;
if (val < read_sample(pmu, gt_id, __I915_SAMPLE_RC6_LAST_REPORTED))
val = read_sample(pmu, gt_id, __I915_SAMPLE_RC6_LAST_REPORTED);
else
pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur = val;
store_sample(pmu, gt_id, __I915_SAMPLE_RC6_LAST_REPORTED, val);
spin_unlock_irqrestore(&pmu->lock, flags);
@ -206,22 +268,29 @@ static u64 get_rc6(struct intel_gt *gt)
static void init_rc6(struct i915_pmu *pmu)
{
struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
intel_wakeref_t wakeref;
struct intel_gt *gt;
unsigned int i;
with_intel_runtime_pm(to_gt(i915)->uncore->rpm, wakeref) {
pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(to_gt(i915));
pmu->sample[__I915_SAMPLE_RC6_LAST_REPORTED].cur =
pmu->sample[__I915_SAMPLE_RC6].cur;
pmu->sleep_last = ktime_get_raw();
for_each_gt(gt, i915, i) {
intel_wakeref_t wakeref;
with_intel_runtime_pm(gt->uncore->rpm, wakeref) {
u64 val = __get_rc6(gt);
store_sample(pmu, i, __I915_SAMPLE_RC6, val);
store_sample(pmu, i, __I915_SAMPLE_RC6_LAST_REPORTED,
val);
pmu->sleep_last[i] = ktime_get_raw();
}
}
}
static void park_rc6(struct drm_i915_private *i915)
static void park_rc6(struct intel_gt *gt)
{
struct i915_pmu *pmu = &i915->pmu;
struct i915_pmu *pmu = &gt->i915->pmu;
pmu->sample[__I915_SAMPLE_RC6].cur = __get_rc6(to_gt(i915));
pmu->sleep_last = ktime_get_raw();
store_sample(pmu, gt->info.id, __I915_SAMPLE_RC6, __get_rc6(gt));
pmu->sleep_last[gt->info.id] = ktime_get_raw();
}
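
init_rc6() and park_rc6() above snapshot the RC6 counter and a timestamp per GT; while a GT stays parked, get_rc6() approximates residency as that snapshot plus the wall time since parking, clamped so the reported value never goes backwards. A stripped-down, self-contained sketch of that estimate, with plain integers standing in for ktime and the PMU sample slots.

#include <assert.h>
#include <stdint.h>

/* Per-GT bookkeeping, in the spirit of the RC6 sample slots and sleep_last[] above. */
struct example_rc6 {
	uint64_t parked_value;	/* residency sampled when the GT parked */
	uint64_t parked_at;	/* timestamp taken at park time, in ns */
	uint64_t last_reported;	/* keeps the reported counter monotonic */
};

/* While parked, assume the GT sits fully in RC6 and extrapolate from the snapshot. */
static uint64_t example_estimate(struct example_rc6 *rc6, uint64_t now)
{
	uint64_t val = rc6->parked_value + (now - rc6->parked_at);

	if (val < rc6->last_reported)
		val = rc6->last_reported;
	else
		rc6->last_reported = val;
	return val;
}

int main(void)
{
	struct example_rc6 rc6 = { .parked_value = 1000, .parked_at = 5000 };

	assert(example_estimate(&rc6, 7000) == 3000);	/* 1000 + (7000 - 5000) */
	return 0;
}
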
static void __i915_pmu_maybe_start_timer(struct i915_pmu *pmu)
@ -235,29 +304,31 @@ static void __i915_pmu_maybe_start_timer(struct i915_pmu *pmu)
}
}
void i915_pmu_gt_parked(struct drm_i915_private *i915)
void i915_pmu_gt_parked(struct intel_gt *gt)
{
struct i915_pmu *pmu = &i915->pmu;
struct i915_pmu *pmu = &gt->i915->pmu;
if (!pmu->base.event_init)
return;
spin_lock_irq(&pmu->lock);
park_rc6(i915);
park_rc6(gt);
/*
* Signal sampling timer to stop if only engine events are enabled and
* GPU went idle.
*/
pmu->timer_enabled = pmu_needs_timer(pmu, false);
pmu->unparked &= ~BIT(gt->info.id);
if (pmu->unparked == 0)
pmu->timer_enabled = pmu_needs_timer(pmu, false);
spin_unlock_irq(&pmu->lock);
}
void i915_pmu_gt_unparked(struct drm_i915_private *i915)
void i915_pmu_gt_unparked(struct intel_gt *gt)
{
struct i915_pmu *pmu = &i915->pmu;
struct i915_pmu *pmu = &gt->i915->pmu;
if (!pmu->base.event_init)
return;
@ -267,7 +338,10 @@ void i915_pmu_gt_unparked(struct drm_i915_private *i915)
/*
* Re-enable sampling timer when GPU goes active.
*/
__i915_pmu_maybe_start_timer(pmu);
if (pmu->unparked == 0)
__i915_pmu_maybe_start_timer(pmu);
pmu->unparked |= BIT(gt->info.id);
spin_unlock_irq(&pmu->lock);
}
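
i915_pmu_gt_parked() and i915_pmu_gt_unparked() now maintain a per-GT bitmask, so the sampling timer is only started when the first GT wakes up and only reconsidered for stopping once the last GT has parked. A small self-contained sketch of that first/last bookkeeping; the timer calls are placeholders.

#include <stdio.h>

static unsigned int unparked;	/* one bit per GT, like pmu->unparked */

static void example_start_timer(void)      { puts("timer started"); }
static void example_maybe_stop_timer(void) { puts("timer may stop"); }

static void example_gt_unparked(unsigned int gt)
{
	if (unparked == 0)		/* first GT to wake up */
		example_start_timer();
	unparked |= 1u << gt;
}

static void example_gt_parked(unsigned int gt)
{
	unparked &= ~(1u << gt);
	if (unparked == 0)		/* last GT to park */
		example_maybe_stop_timer();
}

int main(void)
{
	example_gt_unparked(0);	/* starts the timer */
	example_gt_unparked(1);	/* timer already running */
	example_gt_parked(0);	/* the other GT is still unparked */
	example_gt_parked(1);	/* last one parked, the timer may stop */
	return 0;
}
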
@ -338,6 +412,9 @@ engines_sample(struct intel_gt *gt, unsigned int period_ns)
return;
for_each_engine(engine, gt, id) {
if (!engine->pmu.enable)
continue;
if (!intel_engine_pm_get_if_awake(engine))
continue;
@ -353,34 +430,30 @@ engines_sample(struct intel_gt *gt, unsigned int period_ns)
}
}
static void
add_sample_mult(struct i915_pmu_sample *sample, u32 val, u32 mul)
{
sample->cur += mul_u32_u32(val, mul);
}
static bool frequency_sampling_enabled(struct i915_pmu *pmu)
static bool
frequency_sampling_enabled(struct i915_pmu *pmu, unsigned int gt)
{
return pmu->enable &
(config_mask(I915_PMU_ACTUAL_FREQUENCY) |
config_mask(I915_PMU_REQUESTED_FREQUENCY));
(config_mask(__I915_PMU_ACTUAL_FREQUENCY(gt)) |
config_mask(__I915_PMU_REQUESTED_FREQUENCY(gt)));
}
static void
frequency_sample(struct intel_gt *gt, unsigned int period_ns)
{
struct drm_i915_private *i915 = gt->i915;
const unsigned int gt_id = gt->info.id;
struct i915_pmu *pmu = &i915->pmu;
struct intel_rps *rps = &gt->rps;
if (!frequency_sampling_enabled(pmu))
if (!frequency_sampling_enabled(pmu, gt_id))
return;
/* Report 0/0 (actual/requested) frequency while parked. */
if (!intel_gt_pm_get_if_awake(gt))
return;
if (pmu->enable & config_mask(I915_PMU_ACTUAL_FREQUENCY)) {
if (pmu->enable & config_mask(__I915_PMU_ACTUAL_FREQUENCY(gt_id))) {
u32 val;
/*
@ -396,12 +469,12 @@ frequency_sample(struct intel_gt *gt, unsigned int period_ns)
if (!val)
val = intel_gpu_freq(rps, rps->cur_freq);
add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_ACT],
add_sample_mult(pmu, gt_id, __I915_SAMPLE_FREQ_ACT,
val, period_ns / 1000);
}
if (pmu->enable & config_mask(I915_PMU_REQUESTED_FREQUENCY)) {
add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_REQ],
if (pmu->enable & config_mask(__I915_PMU_REQUESTED_FREQUENCY(gt_id))) {
add_sample_mult(pmu, gt_id, __I915_SAMPLE_FREQ_REQ,
intel_rps_get_requested_frequency(rps),
period_ns / 1000);
}
@ -414,8 +487,9 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
struct drm_i915_private *i915 =
container_of(hrtimer, struct drm_i915_private, pmu.timer);
struct i915_pmu *pmu = &i915->pmu;
struct intel_gt *gt = to_gt(i915);
unsigned int period_ns;
struct intel_gt *gt;
unsigned int i;
ktime_t now;
if (!READ_ONCE(pmu->timer_enabled))
@ -431,8 +505,14 @@ static enum hrtimer_restart i915_sample(struct hrtimer *hrtimer)
* grabbing the forcewake. However the potential error from timer call-
* back delay greatly dominates this so we keep it simple.
*/
engines_sample(gt, period_ns);
frequency_sample(gt, period_ns);
for_each_gt(gt, i915, i) {
if (!(pmu->unparked & BIT(i)))
continue;
engines_sample(gt, period_ns);
frequency_sample(gt, period_ns);
}
hrtimer_forward(hrtimer, now, ns_to_ktime(PERIOD));
@ -473,7 +553,13 @@ config_status(struct drm_i915_private *i915, u64 config)
{
struct intel_gt *gt = to_gt(i915);
switch (config) {
unsigned int gt_id = config_gt_id(config);
unsigned int max_gt_id = HAS_EXTRA_GT_LIST(i915) ? 1 : 0;
if (gt_id > max_gt_id)
return -ENOENT;
switch (config_counter(config)) {
case I915_PMU_ACTUAL_FREQUENCY:
if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
/* Requires a mutex for sampling! */
@ -484,6 +570,8 @@ config_status(struct drm_i915_private *i915, u64 config)
return -ENODEV;
break;
case I915_PMU_INTERRUPTS:
if (gt_id)
return -ENOENT;
break;
case I915_PMU_RC6_RESIDENCY:
if (!gt->rc6.supported)
@ -581,22 +669,27 @@ static u64 __i915_pmu_event_read(struct perf_event *event)
val = engine->pmu.sample[sample].cur;
}
} else {
switch (event->attr.config) {
const unsigned int gt_id = config_gt_id(event->attr.config);
const u64 config = config_counter(event->attr.config);
switch (config) {
case I915_PMU_ACTUAL_FREQUENCY:
val =
div_u64(pmu->sample[__I915_SAMPLE_FREQ_ACT].cur,
div_u64(read_sample(pmu, gt_id,
__I915_SAMPLE_FREQ_ACT),
USEC_PER_SEC /* to MHz */);
break;
case I915_PMU_REQUESTED_FREQUENCY:
val =
div_u64(pmu->sample[__I915_SAMPLE_FREQ_REQ].cur,
div_u64(read_sample(pmu, gt_id,
__I915_SAMPLE_FREQ_REQ),
USEC_PER_SEC /* to MHz */);
break;
case I915_PMU_INTERRUPTS:
val = READ_ONCE(pmu->irq_count);
break;
case I915_PMU_RC6_RESIDENCY:
val = get_rc6(to_gt(i915));
val = get_rc6(i915->gt[gt_id]);
break;
case I915_PMU_SOFTWARE_GT_AWAKE_TIME:
val = ktime_to_ns(intel_gt_get_awake_time(to_gt(i915)));
@ -633,11 +726,10 @@ static void i915_pmu_enable(struct perf_event *event)
{
struct drm_i915_private *i915 =
container_of(event->pmu, typeof(*i915), pmu.base);
const unsigned int bit = event_bit(event);
struct i915_pmu *pmu = &i915->pmu;
unsigned long flags;
unsigned int bit;
bit = event_bit(event);
if (bit == -1)
goto update;
@ -651,7 +743,7 @@ static void i915_pmu_enable(struct perf_event *event)
GEM_BUG_ON(bit >= ARRAY_SIZE(pmu->enable_count));
GEM_BUG_ON(pmu->enable_count[bit] == ~0);
pmu->enable |= BIT_ULL(bit);
pmu->enable |= BIT(bit);
pmu->enable_count[bit]++;
/*
@ -698,7 +790,7 @@ static void i915_pmu_disable(struct perf_event *event)
{
struct drm_i915_private *i915 =
container_of(event->pmu, typeof(*i915), pmu.base);
unsigned int bit = event_bit(event);
const unsigned int bit = event_bit(event);
struct i915_pmu *pmu = &i915->pmu;
unsigned long flags;
@ -734,7 +826,7 @@ static void i915_pmu_disable(struct perf_event *event)
* bitmask when the last listener on an event goes away.
*/
if (--pmu->enable_count[bit] == 0) {
pmu->enable &= ~BIT_ULL(bit);
pmu->enable &= ~BIT(bit);
pmu->timer_enabled &= pmu_needs_timer(pmu, true);
}
@ -848,11 +940,20 @@ static const struct attribute_group i915_pmu_cpumask_attr_group = {
.attrs = i915_cpumask_attrs,
};
#define __event(__config, __name, __unit) \
#define __event(__counter, __name, __unit) \
{ \
.config = (__config), \
.counter = (__counter), \
.name = (__name), \
.unit = (__unit), \
.global = false, \
}
#define __global_event(__counter, __name, __unit) \
{ \
.counter = (__counter), \
.name = (__name), \
.unit = (__unit), \
.global = true, \
}
#define __engine_event(__sample, __name) \
@ -891,15 +992,16 @@ create_event_attributes(struct i915_pmu *pmu)
{
struct drm_i915_private *i915 = container_of(pmu, typeof(*i915), pmu);
static const struct {
u64 config;
unsigned int counter;
const char *name;
const char *unit;
bool global;
} events[] = {
__event(I915_PMU_ACTUAL_FREQUENCY, "actual-frequency", "M"),
__event(I915_PMU_REQUESTED_FREQUENCY, "requested-frequency", "M"),
__event(I915_PMU_INTERRUPTS, "interrupts", NULL),
__event(I915_PMU_RC6_RESIDENCY, "rc6-residency", "ns"),
__event(I915_PMU_SOFTWARE_GT_AWAKE_TIME, "software-gt-awake-time", "ns"),
__event(0, "actual-frequency", "M"),
__event(1, "requested-frequency", "M"),
__global_event(2, "interrupts", NULL),
__event(3, "rc6-residency", "ns"),
__event(4, "software-gt-awake-time", "ns"),
};
static const struct {
enum drm_i915_pmu_engine_sample sample;
@ -914,12 +1016,17 @@ create_event_attributes(struct i915_pmu *pmu)
struct i915_ext_attribute *i915_attr = NULL, *i915_iter;
struct attribute **attr = NULL, **attr_iter;
struct intel_engine_cs *engine;
unsigned int i;
struct intel_gt *gt;
unsigned int i, j;
/* Count how many counters we will be exposing. */
for (i = 0; i < ARRAY_SIZE(events); i++) {
if (!config_status(i915, events[i].config))
count++;
for_each_gt(gt, i915, j) {
for (i = 0; i < ARRAY_SIZE(events); i++) {
u64 config = ___I915_PMU_OTHER(j, events[i].counter);
if (!config_status(i915, config))
count++;
}
}
for_each_uabi_engine(engine, i915) {
@ -949,26 +1056,39 @@ create_event_attributes(struct i915_pmu *pmu)
attr_iter = attr;
/* Initialize supported non-engine counters. */
for (i = 0; i < ARRAY_SIZE(events); i++) {
char *str;
for_each_gt(gt, i915, j) {
for (i = 0; i < ARRAY_SIZE(events); i++) {
u64 config = ___I915_PMU_OTHER(j, events[i].counter);
char *str;
if (config_status(i915, events[i].config))
continue;
if (config_status(i915, config))
continue;
str = kstrdup(events[i].name, GFP_KERNEL);
if (!str)
goto err;
*attr_iter++ = &i915_iter->attr.attr;
i915_iter = add_i915_attr(i915_iter, str, events[i].config);
if (events[i].unit) {
str = kasprintf(GFP_KERNEL, "%s.unit", events[i].name);
if (events[i].global || !HAS_EXTRA_GT_LIST(i915))
str = kstrdup(events[i].name, GFP_KERNEL);
else
str = kasprintf(GFP_KERNEL, "%s-gt%u",
events[i].name, j);
if (!str)
goto err;
*attr_iter++ = &pmu_iter->attr.attr;
pmu_iter = add_pmu_attr(pmu_iter, str, events[i].unit);
*attr_iter++ = &i915_iter->attr.attr;
i915_iter = add_i915_attr(i915_iter, str, config);
if (events[i].unit) {
if (events[i].global || !HAS_EXTRA_GT_LIST(i915))
str = kasprintf(GFP_KERNEL, "%s.unit",
events[i].name);
else
str = kasprintf(GFP_KERNEL, "%s-gt%u.unit",
events[i].name, j);
if (!str)
goto err;
*attr_iter++ = &pmu_iter->attr.attr;
pmu_iter = add_pmu_attr(pmu_iter, str,
events[i].unit);
}
}
}
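
The net effect of the attribute changes above on user-visible event names: on a device with more than one GT, per-GT counters are registered with a "-gt<N>" suffix (for example "rc6-residency-gt1" plus its matching "rc6-residency-gt1.unit" file), while events marked global, such as "interrupts", and all events on single-GT devices keep their existing un-suffixed names.
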

View File

@ -13,8 +13,9 @@
#include <uapi/drm/i915_drm.h>
struct drm_i915_private;
struct intel_gt;
/**
/*
* Non-engine events for which we need to track the enabled-disabled transition
* and current state.
*/
@ -25,7 +26,7 @@ enum i915_pmu_tracked_events {
__I915_PMU_TRACKED_EVENT_COUNT, /* count marker */
};
/**
/*
* Slots used from the sampling timer (non-engine events) with some extras for
* convenience.
*/
@ -37,13 +38,16 @@ enum {
__I915_NUM_PMU_SAMPLERS
};
/**
#define I915_PMU_MAX_GTS 2
/*
* How many different events we track in the global PMU mask.
*
* It is also used to know the needed number of event reference counters.
*/
#define I915_PMU_MASK_BITS \
(I915_ENGINE_SAMPLE_COUNT + __I915_PMU_TRACKED_EVENT_COUNT)
(I915_ENGINE_SAMPLE_COUNT + \
I915_PMU_MAX_GTS * __I915_PMU_TRACKED_EVENT_COUNT)
#define I915_ENGINE_SAMPLE_COUNT (I915_SAMPLE_SEMA + 1)
@ -75,6 +79,10 @@ struct i915_pmu {
* @lock: Lock protecting enable mask and ref count handling.
*/
spinlock_t lock;
/**
* @unparked: GT unparked mask.
*/
unsigned int unparked;
/**
* @timer: Timer for internal i915 PMU sampling.
*/
@ -119,11 +127,11 @@ struct i915_pmu {
* Only global counters are held here, while the per-engine ones are in
* struct intel_engine_cs.
*/
struct i915_pmu_sample sample[__I915_NUM_PMU_SAMPLERS];
struct i915_pmu_sample sample[I915_PMU_MAX_GTS * __I915_NUM_PMU_SAMPLERS];
/**
* @sleep_last: Last time GT parked for RC6 estimation.
*/
ktime_t sleep_last;
ktime_t sleep_last[I915_PMU_MAX_GTS];
/**
* @irq_count: Number of interrupts
*
@ -151,15 +159,15 @@ int i915_pmu_init(void);
void i915_pmu_exit(void);
void i915_pmu_register(struct drm_i915_private *i915);
void i915_pmu_unregister(struct drm_i915_private *i915);
void i915_pmu_gt_parked(struct drm_i915_private *i915);
void i915_pmu_gt_unparked(struct drm_i915_private *i915);
void i915_pmu_gt_parked(struct intel_gt *gt);
void i915_pmu_gt_unparked(struct intel_gt *gt);
#else
static inline int i915_pmu_init(void) { return 0; }
static inline void i915_pmu_exit(void) {}
static inline void i915_pmu_register(struct drm_i915_private *i915) {}
static inline void i915_pmu_unregister(struct drm_i915_private *i915) {}
static inline void i915_pmu_gt_parked(struct drm_i915_private *i915) {}
static inline void i915_pmu_gt_unparked(struct drm_i915_private *i915) {}
static inline void i915_pmu_gt_parked(struct intel_gt *gt) {}
static inline void i915_pmu_gt_unparked(struct intel_gt *gt) {}
#endif
#endif
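
With these changes the enable mask holds the engine sample bits first, followed by one block of tracked-event bits per GT, which is the index other_bit() computes in the PMU code above. A tiny self-contained check of that layout; the individual counts are example values, only the structure matters.

#include <assert.h>

#define EXAMPLE_ENGINE_SAMPLE_COUNT 3	/* stand-in for I915_ENGINE_SAMPLE_COUNT */
#define EXAMPLE_TRACKED_EVENTS      4	/* stand-in for __I915_PMU_TRACKED_EVENT_COUNT */
#define EXAMPLE_MAX_GTS             2	/* matches I915_PMU_MAX_GTS above */

/* Bit index of tracked event 'ev' on GT 'gt', mirroring other_bit(). */
static unsigned int example_event_bit(unsigned int gt, unsigned int ev)
{
	return EXAMPLE_ENGINE_SAMPLE_COUNT + gt * EXAMPLE_TRACKED_EVENTS + ev;
}

int main(void)
{
	/* Everything must fit in a mask sized like I915_PMU_MASK_BITS above. */
	unsigned int mask_bits = EXAMPLE_ENGINE_SAMPLE_COUNT +
				 EXAMPLE_MAX_GTS * EXAMPLE_TRACKED_EVENTS;

	assert(example_event_bit(EXAMPLE_MAX_GTS - 1, EXAMPLE_TRACKED_EVENTS - 1) ==
	       mask_bits - 1);
	return 0;
}
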

View File

@ -172,7 +172,7 @@ enum {
I915_FENCE_FLAG_COMPOSITE,
};
/**
/*
* Request queue structure.
*
* The request queue allows us to note sequence numbers that have been emitted
@ -198,7 +198,7 @@ struct i915_request {
struct drm_i915_private *i915;
/**
/*
* Context and ring buffer related to this request
* Contexts are refcounted, so when this request is associated with a
* context, we must increment the context's refcount, to guarantee that
@ -251,9 +251,9 @@ struct i915_request {
};
struct llist_head execute_cb;
struct i915_sw_fence semaphore;
/**
* @submit_work: complete submit fence from an IRQ if needed for
* locking hierarchy reasons.
/*
* complete submit fence from an IRQ if needed for locking hierarchy
* reasons.
*/
struct irq_work submit_work;
@ -277,35 +277,35 @@ struct i915_request {
*/
const u32 *hwsp_seqno;
/** Position in the ring of the start of the request */
/* Position in the ring of the start of the request */
u32 head;
/** Position in the ring of the start of the user packets */
/* Position in the ring of the start of the user packets */
u32 infix;
/**
/*
* Position in the ring of the start of the postfix.
* This is required to calculate the maximum available ring space
* without overwriting the postfix.
*/
u32 postfix;
/** Position in the ring of the end of the whole request */
/* Position in the ring of the end of the whole request */
u32 tail;
/** Position in the ring of the end of any workarounds after the tail */
/* Position in the ring of the end of any workarounds after the tail */
u32 wa_tail;
/** Preallocate space in the ring for the emitting the request */
/* Preallocate space in the ring for the emitting the request */
u32 reserved_space;
/** Batch buffer pointer for selftest internal use. */
/* Batch buffer pointer for selftest internal use. */
I915_SELFTEST_DECLARE(struct i915_vma *batch);
struct i915_vma_resource *batch_res;
#if IS_ENABLED(CONFIG_DRM_I915_CAPTURE_ERROR)
/**
/*
* Additional buffers requested by userspace to be captured upon
* a GPU hang. The vma/obj on this list are protected by their
* active reference - all objects on this list must also be
@ -314,29 +314,29 @@ struct i915_request {
struct i915_capture_list *capture_list;
#endif
/** Time at which this request was emitted, in jiffies. */
/* Time at which this request was emitted, in jiffies. */
unsigned long emitted_jiffies;
/** timeline->request entry for this request */
/* timeline->request entry for this request */
struct list_head link;
/** Watchdog support fields. */
/* Watchdog support fields. */
struct i915_request_watchdog {
struct llist_node link;
struct hrtimer timer;
} watchdog;
/**
* @guc_fence_link: Requests may need to be stalled when using GuC
* submission waiting for certain GuC operations to complete. If that is
* the case, stalled requests are added to a per context list of stalled
* requests. The below list_head is the link in that list. Protected by
/*
* Requests may need to be stalled when using GuC submission waiting for
* certain GuC operations to complete. If that is the case, stalled
* requests are added to a per context list of stalled requests. The
* below list_head is the link in that list. Protected by
* ce->guc_state.lock.
*/
struct list_head guc_fence_link;
/**
* @guc_prio: Priority level while the request is in flight. Differs
/*
* Priority level while the request is in flight. Differs
* from i915 scheduler priority. See comment above
* I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP for details. Protected by
* ce->guc_active.lock. Two special values (GUC_PRIO_INIT and
@ -348,8 +348,8 @@ struct i915_request {
#define GUC_PRIO_FINI 0xfe
u8 guc_prio;
/**
* @hucq: wait queue entry used to wait on the HuC load to complete
/*
* wait queue entry used to wait on the HuC load to complete
*/
wait_queue_entry_t hucq;
@ -473,7 +473,7 @@ i915_request_has_initial_breadcrumb(const struct i915_request *rq)
return test_bit(I915_FENCE_FLAG_INITIAL_BREADCRUMB, &rq->fence.flags);
}
/**
/*
* Returns true if seq1 is later than seq2.
*/
static inline bool i915_seqno_passed(u32 seq1, u32 seq2)

View File

@ -157,8 +157,7 @@ bool i915_sg_trim(struct sg_table *orig_st);
*/
struct i915_refct_sgt_ops {
/**
* release() - Free the memory of the struct i915_refct_sgt
* @ref: struct kref that is embedded in the struct i915_refct_sgt
* @release: Free the memory of the struct i915_refct_sgt
*/
void (*release)(struct kref *ref);
};
@ -181,7 +180,7 @@ struct i915_refct_sgt {
/**
* i915_refct_sgt_put - Put a refcounted sg-table
* @rsgt the struct i915_refct_sgt to put.
* @rsgt: the struct i915_refct_sgt to put.
*/
static inline void i915_refct_sgt_put(struct i915_refct_sgt *rsgt)
{
@ -191,7 +190,7 @@ static inline void i915_refct_sgt_put(struct i915_refct_sgt *rsgt)
/**
* i915_refct_sgt_get - Get a refcounted sg-table
* @rsgt the struct i915_refct_sgt to get.
* @rsgt: the struct i915_refct_sgt to get.
*/
static inline struct i915_refct_sgt *
i915_refct_sgt_get(struct i915_refct_sgt *rsgt)
@ -203,7 +202,7 @@ i915_refct_sgt_get(struct i915_refct_sgt *rsgt)
/**
* __i915_refct_sgt_init - Initialize a refcounted sg-list with a custom
* operations structure
* @rsgt The struct i915_refct_sgt to initialize.
* @rsgt: The struct i915_refct_sgt to initialize.
* @size: Size in bytes of the underlying memory buffer.
* @ops: A customized operations structure in case the refcounted sg-list
* is embedded into another structure.

View File

@ -250,7 +250,7 @@ wait_remaining_ms_from_jiffies(unsigned long timestamp_jiffies, int to_wait_ms)
}
}
/**
/*
* __wait_for - magic wait macro
*
* Macro to help avoid open coding check/wait/timeout patterns. Note that it's

Some files were not shown because too many files have changed in this diff.