Diffstat (limited to 'Documentation/gpu')
-rw-r--r--  Documentation/gpu/amdgpu/display/mpo-overview.rst |   2
-rw-r--r--  Documentation/gpu/automated_testing.rst           | 144
-rw-r--r--  Documentation/gpu/drivers.rst                     |   1
-rw-r--r--  Documentation/gpu/drm-kms-helpers.rst             |   2
-rw-r--r--  Documentation/gpu/drm-kms.rst                     |   6
-rw-r--r--  Documentation/gpu/drm-mm.rst                      |  20
-rw-r--r--  Documentation/gpu/drm-uapi.rst                    |  84
-rw-r--r--  Documentation/gpu/drm-usage-stats.rst             |   5
-rw-r--r--  Documentation/gpu/i915.rst                        |  33
-rw-r--r--  Documentation/gpu/index.rst                       |   1
-rw-r--r--  Documentation/gpu/kms-properties.csv              |   2
-rw-r--r--  Documentation/gpu/komeda-kms.rst                  |   4
-rw-r--r--  Documentation/gpu/msm-crash-dump.rst              |   2
-rw-r--r--  Documentation/gpu/panfrost.rst                    |  40
-rw-r--r--  Documentation/gpu/rfc/i915_scheduler.rst          |   2
-rw-r--r--  Documentation/gpu/rfc/i915_vm_bind.rst            |   2
-rw-r--r--  Documentation/gpu/rfc/xe.rst                      |  89
-rw-r--r--  Documentation/gpu/todo.rst                        |   8
18 files changed, 359 insertions, 88 deletions
diff --git a/Documentation/gpu/amdgpu/display/mpo-overview.rst b/Documentation/gpu/amdgpu/display/mpo-overview.rst
index 0499aa92d08d..59a4f54a3ac7 100644
--- a/Documentation/gpu/amdgpu/display/mpo-overview.rst
+++ b/Documentation/gpu/amdgpu/display/mpo-overview.rst
@@ -178,7 +178,7 @@ Multiple Display MPO
AMDGPU supports display MPO when using multiple displays; however, this feature
behavior heavily relies on the compositor implementation. Keep in mind that
-usespace can define different policies. For example, some OSes can use MPO to
+userspace can define different policies. For example, some OSes can use MPO to
protect the plane that handles the video playback; notice that we don't have
many limitations for a single display. Nonetheless, this manipulation can have
many more restrictions for a multi-display scenario. The below example shows a
diff --git a/Documentation/gpu/automated_testing.rst b/Documentation/gpu/automated_testing.rst
new file mode 100644
index 000000000000..469b6fb65c30
--- /dev/null
+++ b/Documentation/gpu/automated_testing.rst
@@ -0,0 +1,144 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+=========================================
+Automated testing of the DRM subsystem
+=========================================
+
+Introduction
+============
+
+Making sure that changes to the core or drivers don't introduce regressions can
+be very time-consuming when lots of different hardware configurations need to
+be tested. Moreover, it isn't practical for each person interested in this
+testing to have to acquire and maintain what can be a considerable amount of
+hardware.
+
+Also, it is desirable for developers to check for regressions in their code by
+themselves, instead of relying on the maintainers to find them and then
+report back.
+
+There are facilities in gitlab.freedesktop.org for automatically testing Mesa
+that can also be used to test the DRM subsystem. This document explains how
+people interested in such testing can use this shared infrastructure to save
+considerable time and effort.
+
+
+Relevant files
+==============
+
+drivers/gpu/drm/ci/gitlab-ci.yml
+--------------------------------
+
+This is the root configuration file for GitLab CI. Among other less interesting
+bits, it specifies the exact version of the CI scripts to be used. There are
+some variables that can be modified to change the behavior of the pipeline:
+
+DRM_CI_PROJECT_PATH
+ Repository that contains the Mesa software infrastructure for CI
+
+DRM_CI_COMMIT_SHA
+ A particular revision to use from that repository
+
+UPSTREAM_REPO
+ URL to git repository containing the target branch
+
+TARGET_BRANCH
+ Branch into which this branch is to be merged
+
+IGT_VERSION
+ Revision of igt-gpu-tools being used, from
+ https://gitlab.freedesktop.org/drm/igt-gpu-tools
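+
+For instance, a pipeline pinned to a personal fork of the CI scripts and a
+specific IGT revision might set these variables as follows (the values are
+hypothetical, shown only to illustrate the format)::
+
+  variables:
+    DRM_CI_PROJECT_PATH: janedoe/drm-ci
+    DRM_CI_COMMIT_SHA: 1234abcd5678...
+    IGT_VERSION: 8765dcba4321...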
+
+drivers/gpu/drm/ci/testlist.txt
+-------------------------------
+
+IGT tests to be run on all drivers (unless mentioned in a driver's \*-skips.txt
+file, see below).
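+
+The list is plain text with one IGT test or subtest per line, for example
+(illustrative entries)::
+
+  core_auth@basic-auth
+  kms_addfb_basic@addfb25-bad-modifier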
+
+drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-fails.txt
+----------------------------------------------------------
+
+Lists the known failures for a given driver on a specific hardware revision.
+
+drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-flakes.txt
+-----------------------------------------------------------
+
+Lists the tests that are known to behave unreliably for a given driver on a
+specific hardware revision. These tests won't cause a job to fail regardless of
+the result, but they will still be run.
+
+drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-skips.txt
+-----------------------------------------------------------
+
+Lists the tests that won't be run for a given driver on a specific hardware
+revision. These are usually tests that interfere with the running of the test
+list due to hanging the machine, causing OOM, taking too long, etc.
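+
+As an illustration, a fails file pairs each test with its expected
+non-passing result, one entry per line (hypothetical entries)::
+
+  kms_cursor_legacy@basic-flip-before-cursor,Fail
+  kms_addfb_basic@addfb25-bad-modifier,Crash
+
+The flakes and skips files simply list one test name or regular-expression
+pattern per line, e.g.::
+
+  .*suspend.*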
+
+
+How to enable automated testing on your tree
+============================================
+
+1. Create a Linux tree in https://gitlab.freedesktop.org/ if you don't have one
+yet.
+
+2. In your kernel repo's configuration (e.g.
+https://gitlab.freedesktop.org/janedoe/linux/-/settings/ci_cd), change the
+CI/CD configuration file from .gitlab-ci.yml to
+drivers/gpu/drm/ci/gitlab-ci.yml.
+
+3. Next time you push to this repository, you will see a CI pipeline being
+created (e.g. https://gitlab.freedesktop.org/janedoe/linux/-/pipelines).
+
+4. The various jobs will be run, and when the pipeline is finished, all jobs
+should be green unless a regression has been found.
+
+
+How to update test expectations
+===============================
+
+If your changes to the code fix any tests, you will have to remove one or more
+lines from one or more of the
+drivers/gpu/drm/ci/${DRIVER_NAME}-${HW_REVISION}-fails.txt files, for each of
+the test platforms affected by the change.
+
+
+How to expand coverage
+======================
+
+If your code changes make it possible to run more tests (by solving reliability
+issues, for example), you can remove tests from the flakes and/or skips lists,
+and then update the expected results (the fails lists) if there are any known
+failures.
+
+If there is a need for updating the version of IGT being used (maybe you have
+added more tests to it), update the IGT_VERSION variable at the top of the
+gitlab-ci.yml file.
+
+
+How to test your changes to the scripts
+=======================================
+
+For testing changes to the scripts in the drm-ci repo, change the
+DRM_CI_PROJECT_PATH and DRM_CI_COMMIT_SHA variables in
+drivers/gpu/drm/ci/gitlab-ci.yml to match your fork of the project (e.g.
+janedoe/drm-ci). This fork needs to be in https://gitlab.freedesktop.org/.
+
+
+How to incorporate external fixes in your testing
+=================================================
+
+Often, regressions in other trees will prevent testing changes local to the
+tree under test. These fixes will be automatically merged in during the build
+jobs from a branch in the target tree named
+${TARGET_BRANCH}-external-fixes.
+
+If the pipeline is not in a merge request and a branch with the same name
+exists in the local tree, commits from that branch will be merged in as well.
+
+
+How to deal with automated testing labs that may be down
+========================================================
+
+If a hardware farm is down, causing pipelines to fail that would otherwise
+pass, one can disable all jobs that would be submitted to that farm by editing
+the file at
+https://gitlab.freedesktop.org/gfx-ci/lab-status/-/blob/main/lab-status.yml.
diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index 3a52f48215a3..45a12e552091 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -18,6 +18,7 @@ GPU Driver Documentation
xen-front
afbc
komeda-kms
+ panfrost
.. only:: subproject and html
diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst
index b8ab05e42dbb..b748b8ae70b2 100644
--- a/Documentation/gpu/drm-kms-helpers.rst
+++ b/Documentation/gpu/drm-kms-helpers.rst
@@ -378,7 +378,7 @@ SCDC Helper Functions Reference
HDMI Infoframes Helper Reference
================================
-Strictly speaking this is not a DRM helper library but generally useable
+Strictly speaking this is not a DRM helper library but generally usable
by any driver interfacing with HDMI outputs like v4l or alsa drivers.
But it nicely fits into the overall topic of mode setting helper
libraries and hence is also included here.
diff --git a/Documentation/gpu/drm-kms.rst b/Documentation/gpu/drm-kms.rst
index c92d425cb2dd..a0c83fc48126 100644
--- a/Documentation/gpu/drm-kms.rst
+++ b/Documentation/gpu/drm-kms.rst
@@ -66,11 +66,11 @@ Composition Properties`_ and related chapters.
For the output routing the first step is encoders (represented by
:c:type:`struct drm_encoder <drm_encoder>`, see `Encoder Abstraction`_). Those
are really just internal artifacts of the helper libraries used to implement KMS
-drivers. Besides that they make it unecessarily more complicated for userspace
+drivers. Besides that they make it unnecessarily more complicated for userspace
to figure out which connections between a CRTC and a connector are possible, and
what kind of cloning is supported, they serve no purpose in the userspace API.
Unfortunately encoders have been exposed to userspace, hence can't remove them
-at this point. Futhermore the exposed restrictions are often wrongly set by
+at this point. Furthermore the exposed restrictions are often wrongly set by
drivers, and in many cases not powerful enough to express the real restrictions.
A CRTC can be connected to multiple encoders, and for an active CRTC there must
be at least one encoder.
@@ -260,7 +260,7 @@ Taken all together there's two consequences for the atomic design:
drm_crtc_state <drm_crtc_state>` for CRTCs and :c:type:`struct
drm_connector_state <drm_connector_state>` for connectors. These are the only
objects with userspace-visible and settable state. For internal state drivers
- can subclass these structures through embeddeding, or add entirely new state
+ can subclass these structures through embedding, or add entirely new state
structures for their globally shared hardware functions, see :c:type:`struct
drm_private_state<drm_private_state>`.
diff --git a/Documentation/gpu/drm-mm.rst b/Documentation/gpu/drm-mm.rst
index c19b34b1c0ed..602010cb6894 100644
--- a/Documentation/gpu/drm-mm.rst
+++ b/Documentation/gpu/drm-mm.rst
@@ -466,40 +466,40 @@ DRM MM Range Allocator Function References
.. kernel-doc:: drivers/gpu/drm/drm_mm.c
:export:
-DRM GPU VA Manager
-==================
+DRM GPUVM
+=========
Overview
--------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Overview
Split and Merge
---------------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Split and Merge
Locking
-------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Locking
Examples
--------
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:doc: Examples
-DRM GPU VA Manager Function References
---------------------------------------
+DRM GPUVM Function References
+-----------------------------
-.. kernel-doc:: include/drm/drm_gpuva_mgr.h
+.. kernel-doc:: include/drm/drm_gpuvm.h
:internal:
-.. kernel-doc:: drivers/gpu/drm/drm_gpuva_mgr.c
+.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
:export:
DRM Buddy Allocator
diff --git a/Documentation/gpu/drm-uapi.rst b/Documentation/gpu/drm-uapi.rst
index 65fb3036a580..632989df3727 100644
--- a/Documentation/gpu/drm-uapi.rst
+++ b/Documentation/gpu/drm-uapi.rst
@@ -285,6 +285,83 @@ for GPU1 and GPU2 from different vendors, and a third handler for
mmapped regular files. Threads cause additional pain with signal
handling as well.
+Device reset
+============
+
+The GPU stack is complex and prone to errors, from hardware bugs and faulty
+applications to problems anywhere in between the many layers. Some errors
+require resetting the device in order to make the device usable again. This
+section describes the expectations for DRM and usermode drivers when a
+device resets and how to propagate the reset status.
+
+Device resets cannot be disabled without tainting the kernel, which can lead to
+hanging the entire kernel through shrinkers/mmu_notifiers. Userspace's role in
+device resets is to propagate the message to the application and apply any
+special policy for blocking guilty applications, if any. The corollary is that
+debugging a hung GPU context requires hardware support to be able to preempt
+such a GPU context while it is stopped.
+
+Kernel Mode Driver
+------------------
+
+The KMD is responsible for checking if the device needs a reset, and for
+performing it as needed. Usually a hang is detected when a job gets stuck
+executing. The KMD should keep track of resets, because userspace can query the
+reset status for a specific context at any time. This is needed to propagate to
+the rest of the stack that a reset has happened. Currently, this is implemented
+by each driver separately, with no common DRM interface. Ideally this should be
+properly integrated into the DRM scheduler to provide a common ground for all
+drivers. After a reset, the KMD should reject new command submissions for
+affected contexts.
+
+User Mode Driver
+----------------
+
+After command submission, the UMD should check if the submission was accepted
+or rejected. After a reset, the KMD should reject submissions, and the UMD can
+issue an ioctl to the KMD to check the reset status, polling more often if
+required. After detecting a reset, the UMD will then proceed to report it to
+the application using the appropriate API error code, as explained in the
+section below about robustness.
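+
+Because there is no common DRM interface for this yet, the exact query is
+driver-specific. The following is only a sketch of the pattern, using a
+hypothetical ``DRM_IOCTL_FOO_GET_RESET_STATUS`` ioctl and a made-up
+``struct foo_reset_query``::
+
+  #include <stdbool.h>
+  #include <sys/ioctl.h>
+  #include <linux/types.h>
+
+  /* Hypothetical per-driver reset query; real drivers each define their own. */
+  struct foo_reset_query {
+          __u32 ctx_id;       /* in: context to query */
+          __u32 reset_count;  /* out: resets seen by this context so far */
+  };
+
+  static bool foo_context_was_reset(int fd, __u32 ctx_id, __u32 *last_count)
+  {
+          struct foo_reset_query q = { .ctx_id = ctx_id };
+
+          if (ioctl(fd, DRM_IOCTL_FOO_GET_RESET_STATUS, &q) != 0)
+                  return false; /* query failed, assume no reset */
+
+          if (q.reset_count != *last_count) {
+                  *last_count = q.reset_count;
+                  return true;  /* report the reset to the application */
+          }
+          return false;
+  }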
+
+Robustness
+----------
+
+The only way to try to keep a graphical API context working after a reset is if
+it complies with the robustness aspects of the graphical API that it is using.
+
+Graphical APIs provide ways for applications to deal with device resets.
+However, there is no guarantee that an app will use such features correctly,
+and userspace that doesn't support robust interfaces (like a non-robust
+OpenGL context, or an API without any robustness support like libva) leaves the
+robustness handling entirely to the userspace driver. There is no strong
+community consensus on what the userspace driver should do in that case,
+since all reasonable approaches have some clear downsides.
+
+OpenGL
+~~~~~~
+
+Apps using OpenGL should use the available robust interfaces, like the
+``GL_ARB_robustness`` extension (or ``GL_EXT_robustness`` for OpenGL ES). This
+interface tells the app if a reset has happened, and if so, all the context
+state is considered lost and the app proceeds by creating new contexts. There's
+no consensus on what to do if robustness is not in use.
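+
+As a minimal sketch, an app can poll the reset status with the
+``glGetGraphicsResetStatusARB`` entry point provided by the extension
+(``recreate_context`` is a hypothetical app-side helper; the context must have
+been created with reset notification enabled)::
+
+  #include <GL/gl.h>
+  #include <GL/glext.h>
+
+  void check_for_reset(void)
+  {
+          GLenum status = glGetGraphicsResetStatusARB();
+
+          if (status != GL_NO_ERROR) {
+                  /* All context state is lost: create a new context and
+                   * reupload resources before continuing to render. */
+                  recreate_context();
+          }
+  }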
+
+Vulkan
+~~~~~~
+
+Apps using Vulkan should check for ``VK_ERROR_DEVICE_LOST`` on submissions.
+This error code means, among other things, that a device reset has happened and
+that the app needs to recreate the contexts to keep going.
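+
+A minimal sketch of checking this on queue submission follows
+(``recover_from_device_loss`` is a hypothetical app-side helper; on device
+loss the logical device and everything created from it must be recreated)::
+
+  #include <vulkan/vulkan.h>
+
+  VkResult submit_and_check(VkQueue queue, const VkSubmitInfo *info,
+                            VkFence fence)
+  {
+          VkResult res = vkQueueSubmit(queue, 1, info, fence);
+
+          if (res == VK_ERROR_DEVICE_LOST) {
+                  /* A device reset (or other fatal error) happened: tear
+                   * down and recreate the VkDevice and dependent objects. */
+                  recover_from_device_loss();
+          }
+          return res;
+  }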
+
+Reporting causes of resets
+--------------------------
+
+Apart from propagating the reset through the stack so apps can recover, it's
+really useful for driver developers to learn more about what caused the reset in
+the first place. DRM devices should make use of devcoredump to store relevant
+information about the reset, so this information can be added to user bug
+reports.
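+
+As a kernel-side sketch, a driver's hang handler can hand a captured snapshot
+to devcoredump with ``dev_coredumpv()``; the dump then appears under
+/sys/class/devcoredump/devcd<N>/data (the ``mydrv_*`` names are hypothetical)::
+
+  #include <linux/devcoredump.h>
+
+  static void mydrv_report_reset(struct mydrv_device *mdev)
+  {
+          size_t len;
+          void *snapshot = mydrv_capture_state(mdev, &len); /* hypothetical */
+
+          if (snapshot)
+                  /* devcoredump takes ownership and vfree()s the buffer. */
+                  dev_coredumpv(mdev->dev, snapshot, len, GFP_KERNEL);
+  }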
+
.. _drm_driver_ioctl:
IOCTL Support on Device Nodes
@@ -486,3 +563,10 @@ and the CRTC index is its position in this array.
.. kernel-doc:: include/uapi/drm/drm_mode.h
:internal:
+
+
+dma-buf interoperability
+========================
+
+Please see Documentation/userspace-api/dma-buf-alloc-exchange.rst for
+information on how dma-buf is integrated and exposed within DRM.
diff --git a/Documentation/gpu/drm-usage-stats.rst b/Documentation/gpu/drm-usage-stats.rst
index fe35a291ff3e..7aca5c7a7b1d 100644
--- a/Documentation/gpu/drm-usage-stats.rst
+++ b/Documentation/gpu/drm-usage-stats.rst
@@ -8,7 +8,7 @@ DRM drivers can choose to export partly standardised text output via the
`fops->show_fdinfo()` as part of the driver specific file operations registered
in the `struct drm_driver` object registered with the DRM core.
-One purpose of this output is to enable writing as generic as practicaly
+One purpose of this output is to enable writing as generic as practically
feasible `top(1)` like userspace monitoring tools.
Given the differences between various DRM drivers the specification of the
@@ -119,7 +119,7 @@ drm-engine-<keystr> tag and shall contain the maximum frequency for the given
engine. Taken together with drm-cycles-<keystr>, this can be used to calculate
percentage utilization of the engine, whereas drm-engine-<keystr> only reflects
time active without considering what frequency the engine is operating as a
-percentage of it's maximum frequency.
+percentage of its maximum frequency.
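+
+For example, a monitoring tool sampling the fdinfo file at times ``t1`` and
+``t2`` can estimate utilization as (a sketch of the intended computation)::
+
+  utilization = (cycles@t2 - cycles@t1) / (maxfreq * (t2 - t1))
+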
Memory
^^^^^^
@@ -169,3 +169,4 @@ Driver specific implementations
-------------------------------
:ref:`i915-usage-stats`
+:ref:`panfrost-usage-stats`
diff --git a/Documentation/gpu/i915.rst b/Documentation/gpu/i915.rst
index 60ea21734902..0ca1550fd9dc 100644
--- a/Documentation/gpu/i915.rst
+++ b/Documentation/gpu/i915.rst
@@ -267,19 +267,22 @@ i915 driver.
Intel GPU Basics
----------------
-An Intel GPU has multiple engines. There are several engine types.
-
-- RCS engine is for rendering 3D and performing compute, this is named
- `I915_EXEC_RENDER` in user space.
-- BCS is a blitting (copy) engine, this is named `I915_EXEC_BLT` in user
- space.
-- VCS is a video encode and decode engine, this is named `I915_EXEC_BSD`
- in user space
-- VECS is video enhancement engine, this is named `I915_EXEC_VEBOX` in user
- space.
-- The enumeration `I915_EXEC_DEFAULT` does not refer to specific engine;
- instead it is to be used by user space to specify a default rendering
- engine (for 3D) that may or may not be the same as RCS.
+An Intel GPU has multiple engines. There are several engine types:
+
+- Render Command Streamer (RCS). An engine for rendering 3D and
+ performing compute.
+- Blitting Command Streamer (BCS). An engine for performing blitting and/or
+ copying operations.
+- Video Command Streamer (VCS). An engine used for video encoding and decoding. Also
+ sometimes called 'BSD' in hardware documentation.
+- Video Enhancement Command Streamer (VECS). An engine for video enhancement.
+ Also sometimes called 'VEBOX' in hardware documentation.
+- Compute Command Streamer (CCS). An engine that has access to the media and
+ GPGPU pipelines, but not the 3D pipeline.
+- Graphics Security Controller (GSCCS). A dedicated engine for internal
+ communication with GSC controller on security related tasks like
+ High-bandwidth Digital Content Protection (HDCP), Protected Xe Path (PXP),
+ and HuC firmware authentication.
The Intel GPU family is a family of integrated GPU's using Unified
Memory Access. For having the GPU "do work", user space will feed the
@@ -304,10 +307,10 @@ reads of following commands. Actions issued between different contexts
and the only way to synchronize across contexts (even from the same
file descriptor) is through the use of fences. At least as far back as
Gen4, also have that a context carries with it a GPU HW context;
-the HW context is essentially (most of atleast) the state of a GPU.
+the HW context is essentially (most of at least) the state of a GPU.
In addition to the ordering guarantees, the kernel will restore GPU
state via HW context when commands are issued to a context, this saves
-user space the need to restore (most of atleast) the GPU state at the
+user space the need to restore (most of at least) the GPU state at the
start of each batchbuffer. The non-deprecated ioctls to submit batchbuffer
work can pass that ID (in the lower bits of drm_i915_gem_execbuffer2::rsvd1)
to identify what context to use with the command.
diff --git a/Documentation/gpu/index.rst b/Documentation/gpu/index.rst
index eee5996acf2c..e45ff0915246 100644
--- a/Documentation/gpu/index.rst
+++ b/Documentation/gpu/index.rst
@@ -17,6 +17,7 @@ GPU Driver Developer's Guide
backlight
vga-switcheroo
vgaarbiter
+ automated_testing
todo
rfc/index
diff --git a/Documentation/gpu/kms-properties.csv b/Documentation/gpu/kms-properties.csv
index 07ed22ea3bd6..0f9590834829 100644
--- a/Documentation/gpu/kms-properties.csv
+++ b/Documentation/gpu/kms-properties.csv
@@ -17,7 +17,7 @@ Owner Module/Drivers,Group,Property Name,Type,Property Values,Object attached,De
,Virtual GPU,“suggested X”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest an X offset for a connector
,,“suggested Y”,RANGE,"Min=0, Max=0xffffffff",Connector,property to suggest an Y offset for a connector
,Optional,"""aspect ratio""",ENUM,"{ ""None"", ""4:3"", ""16:9"" }",Connector,TDB
-i915,Generic,"""Broadcast RGB""",ENUM,"{ ""Automatic"", ""Full"", ""Limited 16:235"" }",Connector,"When this property is set to Limited 16:235 and CTM is set, the hardware will be programmed with the result of the multiplication of CTM by the limited range matrix to ensure the pixels normaly in the range 0..1.0 are remapped to the range 16/255..235/255."
+i915,Generic,"""Broadcast RGB""",ENUM,"{ ""Automatic"", ""Full"", ""Limited 16:235"" }",Connector,"When this property is set to Limited 16:235 and CTM is set, the hardware will be programmed with the result of the multiplication of CTM by the limited range matrix to ensure the pixels normally in the range 0..1.0 are remapped to the range 16/255..235/255."
,,“audio”,ENUM,"{ ""force-dvi"", ""off"", ""auto"", ""on"" }",Connector,TBD
,SDVO-TV,“mode”,ENUM,"{ ""NTSC_M"", ""NTSC_J"", ""NTSC_443"", ""PAL_B"" } etc.",Connector,TBD
,,"""left_margin""",RANGE,"Min=0, Max= SDVO dependent",Connector,TBD
diff --git a/Documentation/gpu/komeda-kms.rst b/Documentation/gpu/komeda-kms.rst
index eb693c857e2d..633a016563ae 100644
--- a/Documentation/gpu/komeda-kms.rst
+++ b/Documentation/gpu/komeda-kms.rst
@@ -328,7 +328,7 @@ of course we’d better share as much as possible between different products. To
achieve this, split the komeda device into two layers: CORE and CHIP.
- CORE: for common features and capabilities handling.
-- CHIP: for register programing and HW specific feature (limitation) handling.
+- CHIP: for register programming and HW specific feature (limitation) handling.
CORE can access CHIP by three chip function structures:
@@ -481,7 +481,7 @@ Build komeda to be a Linux module driver
Now we have two level devices:
- komeda_dev: describes the real display hardware.
-- komeda_kms_dev: attachs or connects komeda_dev to DRM-KMS.
+- komeda_kms_dev: attaches or connects komeda_dev to DRM-KMS.
All komeda operations are supplied or operated by komeda_dev or komeda_kms_dev,
the module driver is only a simple wrapper to pass the Linux command
diff --git a/Documentation/gpu/msm-crash-dump.rst b/Documentation/gpu/msm-crash-dump.rst
index 240ef200f76c..9509cc4224f4 100644
--- a/Documentation/gpu/msm-crash-dump.rst
+++ b/Documentation/gpu/msm-crash-dump.rst
@@ -23,7 +23,7 @@ module
The module that generated the crashdump.
time
- The kernel time at crash formated as seconds.microseconds.
+ The kernel time at crash formatted as seconds.microseconds.
comm
Comm string for the binary that generated the fault.
diff --git a/Documentation/gpu/panfrost.rst b/Documentation/gpu/panfrost.rst
new file mode 100644
index 000000000000..b80e41f4b2c5
--- /dev/null
+++ b/Documentation/gpu/panfrost.rst
@@ -0,0 +1,40 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+=========================
+ drm/Panfrost Mali Driver
+=========================
+
+.. _panfrost-usage-stats:
+
+Panfrost DRM client usage stats implementation
+==============================================
+
+The drm/Panfrost driver implements the DRM client usage stats specification as
+documented in :ref:`drm-client-usage-stats`.
+
+Example of the output showing the implemented key value pairs and the entirety
+of the currently possible format options:
+
+::
+
+ pos: 0
+ flags: 02400002
+ mnt_id: 27
+ ino: 531
+ drm-driver: panfrost
+ drm-client-id: 14
+ drm-engine-fragment: 1846584880 ns
+ drm-cycles-fragment: 1424359409
+ drm-maxfreq-fragment: 799999987 Hz
+ drm-curfreq-fragment: 799999987 Hz
+ drm-engine-vertex-tiler: 71932239 ns
+ drm-cycles-vertex-tiler: 52617357
+ drm-maxfreq-vertex-tiler: 799999987 Hz
+ drm-curfreq-vertex-tiler: 799999987 Hz
+ drm-total-memory: 290 MiB
+ drm-shared-memory: 0 MiB
+ drm-active-memory: 226 MiB
+ drm-resident-memory: 36496 KiB
+ drm-purgeable-memory: 128 KiB
+
+Possible `drm-engine-` key names are: `fragment` and `vertex-tiler`.
+`drm-curfreq-` values convey the current operating frequency for that engine.
diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
index ec086e7a43ff..c237ebc024cd 100644
--- a/Documentation/gpu/rfc/i915_scheduler.rst
+++ b/Documentation/gpu/rfc/i915_scheduler.rst
@@ -37,7 +37,7 @@ i915 with the DRM scheduler is:
* Watchdog hooks into DRM scheduler
* Lots of complexity of the GuC backend can be pulled out once
integrated with DRM scheduler (e.g. state machine gets
- simplier, locking gets simplier, etc...)
+ simpler, locking gets simpler, etc...)
* Execlists backend will minimum required to hook in the DRM scheduler
* Legacy interface
* Features like timeslicing / preemption / virtual engines would
diff --git a/Documentation/gpu/rfc/i915_vm_bind.rst b/Documentation/gpu/rfc/i915_vm_bind.rst
index 9a1dcdf2799e..0b3b525ac620 100644
--- a/Documentation/gpu/rfc/i915_vm_bind.rst
+++ b/Documentation/gpu/rfc/i915_vm_bind.rst
@@ -90,7 +90,7 @@ submission, they need only one dma-resv fence list updated. Thus, the fast
path (where required mappings are already bound) submission latency is O(1)
w.r.t the number of VM private BOs.
-VM_BIND locking hirarchy
+VM_BIND locking hierarchy
-------------------------
The locking design here supports the older (execlist based) execbuf mode, the
newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
diff --git a/Documentation/gpu/rfc/xe.rst b/Documentation/gpu/rfc/xe.rst
index 2516fe141db6..b67f8e6a1825 100644
--- a/Documentation/gpu/rfc/xe.rst
+++ b/Documentation/gpu/rfc/xe.rst
@@ -67,14 +67,8 @@ platforms.
When the time comes for Xe, the protection will be lifted on Xe and kept in i915.
-Xe driver will be protected with both STAGING Kconfig and force_probe. Changes in
-the uAPI are expected while the driver is behind these protections. STAGING will
-be removed when the driver uAPI gets to a mature state where we can guarantee the
-‘no regression’ rule. Then force_probe will be lifted only for future platforms
-that will be productized with Xe driver, but not with i915.
-
-Xe – Pre-Merge Goals
-====================
+Xe – Pre-Merge Goals - Work-in-Progress
+=======================================
Drm_scheduler
-------------
@@ -94,41 +88,6 @@ depend on any other patch touching drm_scheduler itself that was not yet merged
through drm-misc. This, by itself, already includes the reach of an agreement for
uniform 1 to 1 relationship implementation / usage across drivers.
-GPU VA
-------
-Two main goals of Xe are meeting together here:
-
-1) Have an uAPI that aligns with modern UMD needs.
-
-2) Early upstream engagement.
-
-RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
-track of GPU virtual address mappings. This is still not merged upstream, but
-this aligns very well with our goals and with our VM_BIND. The engagement with
-upstream and the port of Xe towards GPUVA is already ongoing.
-
-As a key measurable result, Xe needs to be aligned with the GPU VA and working in
-our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
-related patch should be independent and present on dri-devel or acked by
-maintainers to go along with the first Xe pull request towards drm-next.
-
-DRM_VM_BIND
------------
-Nouveau, and Xe are all implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
-fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
-development of a common new drm_infrastructure. However, the Xe team needs to
-engage with the community to explore the options of a common API.
-
-As a key measurable result, the DRM_VM_BIND needs to be documented in this file
-below, or this entire block deleted if the consensus is for independent drivers
-vm_bind ioctls.
-
-Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
-Xe merged, it is mandatory to enforce the overall locking scheme for all major
-structs and list (so vm and vma). So, a consensus is needed, and possibly some
-common helpers. If helpers are needed, they should be also documented in this
-document.
-
ASYNC VM_BIND
-------------
Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
@@ -212,6 +171,14 @@ This item ties into the GPUVA, VM_BIND, and even long-running compute support.
As a key measurable result, we need to have a community consensus documented in
this document and the Xe driver prepared for the changes, if necessary.
+Xe – uAPI high level overview
+=============================
+
+...Warning: To be done in follow-up patches after/when/where the main consensus on the various items is individually reached.
+
+Xe – Pre-Merge Goals - Completed
+================================
+
Dev_coredump
------------
@@ -229,7 +196,37 @@ infrastructure with overall possible improvements, like multiple file support
for better organization of the dumps, snapshot support, dmesg extra print,
and whatever may make sense and help the overall infrastructure.
-Xe – uAPI high level overview
-=============================
+DRM_VM_BIND
+-----------
+Nouveau and Xe are both implementing ‘VM_BIND’ and new ‘Exec’ uAPIs in order to
+fulfill the needs of the modern uAPI. Xe merge should *not* be blocked on the
+development of a common new drm_infrastructure. However, the Xe team needs to
+engage with the community to explore the options of a common API.
-...Warning: To be done in follow up patches after/when/where the main consensus in various items are individually reached.
+As a key measurable result, the DRM_VM_BIND needs to be documented in this file
+below, or this entire block deleted if the consensus is for independent drivers
+vm_bind ioctls.
+
+Although having a common DRM level IOCTL for VM_BIND is not a requirement to get
+Xe merged, it is mandatory to enforce the overall locking scheme for all major
+structs and lists (so vm and vma). So, a consensus is needed, and possibly some
+common helpers. If helpers are needed, they should be also documented in this
+document.
+
+GPU VA
+------
+Two main goals of Xe are meeting together here:
+
+1) Have an uAPI that aligns with modern UMD needs.
+
+2) Early upstream engagement.
+
+RedHat engineers working on Nouveau proposed a new DRM feature to handle keeping
+track of GPU virtual address mappings. This is still not merged upstream, but
+this aligns very well with our goals and with our VM_BIND. The engagement with
+upstream and the port of Xe towards GPUVA is already ongoing.
+
+As a key measurable result, Xe needs to be aligned with the GPU VA and working in
+our tree. Missing Nouveau patches should *not* block Xe and any needed GPUVA
+related patch should be independent and present on dri-devel or acked by
+maintainers to go along with the first Xe pull request towards drm-next.
diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 139980487ccf..03fe5d1247be 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -69,7 +69,7 @@ Clean up the clipped coordination confusion around planes
---------------------------------------------------------
We have a helper to get this right with drm_plane_helper_check_update(), but
-it's not consistently used. This should be fixed, preferrably in the atomic
+it's not consistently used. This should be fixed, preferably in the atomic
helpers (and drivers then moved over to clipped coordinates). Probably the
helper should also be moved from drm_plane_helper.c to the atomic helpers, to
avoid confusion - the other helpers in that file are all deprecated legacy
@@ -185,13 +185,13 @@ reversed.
To solve this we need one standard per-object locking mechanism, which is
dma_resv_lock(). This lock needs to be called as the outermost lock, with all
-other driver specific per-object locks removed. The problem is tha rolling out
+other driver specific per-object locks removed. The problem is that rolling out
the actual change to the locking contract is a flag day, due to struct dma_buf
buffer sharing.
Level: Expert
-Convert logging to drm_* functions with drm_device paramater
+Convert logging to drm_* functions with drm_device parameter
------------------------------------------------------------
For drivers which could have multiple instances, it is necessary to
@@ -248,7 +248,7 @@ Level: Advanced
Benchmark and optimize blitting and format-conversion function
--------------------------------------------------------------
-Drawing to dispay memory quickly is crucial for many applications'
+Drawing to display memory quickly is crucial for many applications'
performance.
On at least x86-64, sys_imageblit() is significantly slower than