path: root/drivers/gpu/drm/nouveau/nouveau_exec.c
2025-07-15  drm/sched: Rename DRM_GPU_SCHED_STAT_NOMINAL to DRM_GPU_SCHED_STAT_RESET  (Maíra Canal)

Among the scheduler's statuses, the only one that indicates an error is DRM_GPU_SCHED_STAT_ENODEV. Any other status signifies that the operation succeeded and the GPU is in a nominal state. However, to provide more information about the GPU's status, we need to convey more than just "OK".

Therefore, rename DRM_GPU_SCHED_STAT_NOMINAL to DRM_GPU_SCHED_STAT_RESET, which better communicates the meaning of this status. DRM_GPU_SCHED_STAT_RESET indicates that the GPU hung, but has been successfully reset and is now in a nominal state again.

Reviewed-by: Philipp Stanner <phasta@kernel.org>
Link: https://lore.kernel.org/r/20250714-sched-skip-reset-v6-1-5c5ba4f55039@igalia.com
Signed-off-by: Maíra Canal <mcanal@igalia.com>
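A minimal sketch of how a driver's timedout_job callback might return the renamed status; my_job, my_device and my_device_reset() are hypothetical names introduced only for illustration, while the enum values and struct drm_sched_job come from the DRM scheduler.

  #include <drm/gpu_scheduler.h>

  struct my_device;
  bool my_device_reset(struct my_device *dev);   /* hypothetical reset helper */

  struct my_job {
          struct drm_sched_job base;
          struct my_device *dev;
  };

  static enum drm_gpu_sched_stat
  my_timedout_job(struct drm_sched_job *sched_job)
  {
          struct my_job *job = container_of(sched_job, struct my_job, base);

          if (!my_device_reset(job->dev))
                  /* The only status that signals an error: device is gone. */
                  return DRM_GPU_SCHED_STAT_ENODEV;

          /* The GPU hung, was reset, and is nominal again. */
          return DRM_GPU_SCHED_STAT_RESET;
  }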
2025-05-19  drm/nouveau/gf100-: track chan progress with non-WFI semaphore release  (Ben Skeggs)

From VOLTA_CHANNEL_GPFIFO_A onwards, HW no longer updates the GET/GP_GET pointers in USERD following channel progress, but instead updates them on a timer for compatibility, and SW is expected to implement its own method of tracking channel progress (typically via non-WFI semaphore release).

Nouveau has been making use of the compatibility mode up until now; however, from BLACKWELL_CHANNEL_GPFIFO_A, HW no longer supports USERD writeback at all.

Allocate a per-channel buffer in system memory, and append a non-WFI semaphore release to the end of each push buffer segment to simulate the pointers previously read from USERD.

This change is implemented for Fermi (the first to support non-WFI semaphore release) onwards, as readback from system memory is likely faster than BAR1 reads.

Signed-off-by: Ben Skeggs <bskeggs@nvidia.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Timur Tabi <ttabi@nvidia.com>
Tested-by: Timur Tabi <ttabi@nvidia.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
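A conceptual sketch of the tracking scheme this commit describes: a per-channel sequence buffer in system memory that a non-WFI semaphore release writes after each push buffer segment. Every identifier below (chan_progress, emit_sem_release_non_wfi(), and so on) is hypothetical and stands in for hardware-specific details, not Nouveau's actual implementation.

  #include <linux/types.h>
  #include <linux/compiler.h>

  /* Hypothetical per-channel progress state. */
  struct chan_progress {
          u32 *sysmem;   /* per-channel buffer in system memory */
          u32  seq;      /* sequence number of the last submitted segment */
  };

  /* Hypothetical helper that emits a non-WFI semaphore release writing
   * 'seq' to 'gpu_addr' once the preceding methods have completed. */
  void emit_sem_release_non_wfi(u64 gpu_addr, u32 seq);

  /* Appended to the end of each push buffer segment. */
  static void chan_track_segment(struct chan_progress *p, u64 sem_gpu_addr)
  {
          emit_sem_release_non_wfi(sem_gpu_addr, ++p->seq);
  }

  /* Progress is then read back from system memory instead of USERD GP_GET. */
  static bool chan_segment_done(struct chan_progress *p, u32 seq)
  {
          return (s32)(READ_ONCE(*p->sysmem) - seq) >= 0;
  }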
2025-05-19  drm/nouveau/nv50-: separate CHANNEL_GPFIFO handling out from CHANNEL_DMA  (Ben Skeggs)

Primarily a cleanup to allow for changes in newer CHANNEL_GPFIFO classes to be more easily implemented.

Compared to the prior implementation, this submits userspace push buffer segments as subroutines and uses the NV_RAMUSERD_TOP_LEVEL_GET registers to track the main (kernel) push buffer progress.

Fixes a number of sporadic failures seen during piglit runs.

Signed-off-by: Ben Skeggs <bskeggs@nvidia.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Timur Tabi <ttabi@nvidia.com>
Tested-by: Timur Tabi <ttabi@nvidia.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
2024-06-17  drm/nouveau: Constify struct nouveau_job_ops  (Christophe JAILLET)

"struct nouveau_job_ops" is not modified in these drivers. Constifying this structure moves some data to a read-only section, which increases overall security.

In order to do it, "struct nouveau_job" and "struct nouveau_job_args" also need to be adjusted to this new const qualifier.

On x86_64, with allmodconfig:

Before:
======
   text    data     bss     dec     hex filename
   5570     152       0    5722    165a drivers/gpu/drm/nouveau/nouveau_exec.o

After:
=====
   text    data     bss     dec     hex filename
   5630     112       0    5742    166e drivers/gpu/drm/nouveau/nouveau_exec.o

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/860e9753d7867aa46b003bb3d0497f1b04065b24.1718381285.git.christophe.jaillet@wanadoo.fr
2024-02-12  drm/nouveau: don't fini scheduler if not initialized  (Danilo Krummrich)

nouveau_abi16_ioctl_channel_alloc() and nouveau_cli_init() simply call their corresponding *_fini() counterparts. This can lead to nouveau_sched_fini() being called without struct nouveau_sched ever having been initialized in the first place.

Instead of embedding struct nouveau_sched into struct nouveau_cli and struct nouveau_chan_abi16, allocate struct nouveau_sched separately, such that we can check the corresponding pointer for NULL in the particular *_fini() functions.

It makes sense to allocate struct nouveau_sched separately anyway, since in a subsequent commit we can then also avoid allocating a struct nouveau_sched in nouveau_abi16_ioctl_channel_alloc() at all, if the VM_BIND uAPI has been disabled due to the legacy uAPI being used.

Fixes: 5f03a507b29e ("drm/nouveau: implement 1:1 scheduler - entity relationship")
Reported-by: Timur Tabi <ttabi@nvidia.com>
Tested-by: Timur Tabi <ttabi@nvidia.com>
Closes: https://lore.kernel.org/nouveau/20240131213917.1545604-1-ttabi@nvidia.com/
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240202000606.3526-1-dakr@redhat.com
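A simplified sketch of the resulting pattern, assuming a struct nouveau_cli with a nouveau_sched pointer member; the setup/fini helpers and the trimmed-down structs are illustrative only.

  #include <linux/slab.h>

  struct nouveau_sched { /* members elided */ };

  struct nouveau_cli {
          struct nouveau_sched *sched;   /* allocated separately, may be NULL */
          /* other members elided */
  };

  static int cli_sched_setup(struct nouveau_cli *cli)
  {
          struct nouveau_sched *sched = kzalloc(sizeof(*sched), GFP_KERNEL);

          if (!sched)
                  return -ENOMEM;

          cli->sched = sched;
          return 0;
  }

  static void cli_sched_fini(struct nouveau_cli *cli)
  {
          /* Safe even if setup never ran or failed before the allocation. */
          if (!cli->sched)
                  return;

          kfree(cli->sched);
          cli->sched = NULL;
  }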
2023-11-24  drm/nouveau: enable dynamic job-flow control  (Danilo Krummrich)

Make use of the scheduler's credit limit and scheduler job's credit count to account for the actual size of a job, such that we fill up the ring efficiently.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231114002728.3491-2-dakr@redhat.com
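A sketch of the credit accounting this change enables. The drm_sched_job_init() signature with a credits argument reflects the scheduler API of roughly this era and has changed since; my_exec_job and job_size_in_ring_units() are hypothetical.

  #include <drm/gpu_scheduler.h>

  struct my_exec_job {
          struct drm_sched_job base;
          u32 push_count;                /* hypothetical job payload */
  };

  /* Hypothetical: how much ring space this particular job will consume. */
  static u32 job_size_in_ring_units(const struct my_exec_job *job)
  {
          return job->push_count + 1;    /* +1 for a fence/semaphore slot */
  }

  static int my_exec_job_init(struct my_exec_job *job,
                              struct drm_sched_entity *entity, void *owner)
  {
          /* The scheduler only runs the job once the ring has this many
           * credits available, so the ring is filled efficiently instead of
           * assuming a worst-case fixed job size. */
          return drm_sched_job_init(&job->base, entity,
                                    job_size_in_ring_units(job), owner);
  }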
2023-11-24  drm/nouveau: implement 1:1 scheduler - entity relationship  (Danilo Krummrich)

Recent patches to the DRM scheduler [1][2] allow for a variable number of run-queues and add support for (shared) workqueues rather than dedicated kthreads per scheduler. This allows us to create a 1:1 relationship between a GPU scheduler and a scheduler entity, in order to properly support firmware schedulers that can handle an arbitrary number of dynamically allocated command ring buffers. This perfectly matches Nouveau's needs, hence make use of it.

Topology-wise, we create one scheduler instance per client (handling VM_BIND jobs) and one scheduler instance per channel (handling EXEC jobs). All channel scheduler instances share a workqueue, but every client scheduler instance has a dedicated workqueue. The latter is required to ensure that, for VM_BIND jobs, free_job() work and run_job() work can always run concurrently and hence free_job() work can never stall run_job() work. For EXEC jobs we don't have this requirement, since EXEC jobs' free_job() does not need to take any locks which are directly or indirectly held for allocations elsewhere.

[1] https://lore.kernel.org/all/8f53f7ef-7621-4f0b-bdef-d8d20bc497ff@redhat.com/T/
[2] https://lore.kernel.org/all/20231031032439.1558703-1-matthew.brost@intel.com/T/

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231114002728.3491-1-dakr@redhat.com
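A structural sketch of the topology described above; the member names are illustrative rather than Nouveau's exact definitions.

  #include <drm/gpu_scheduler.h>
  #include <linux/workqueue.h>

  /*
   * One of these per client (VM_BIND jobs) and one per channel (EXEC jobs),
   * giving a 1:1 scheduler:entity relationship. Client instances get a
   * dedicated workqueue so that free_job() work can never stall run_job()
   * work; channel instances all share a common workqueue.
   */
  struct nouveau_sched {
          struct drm_gpu_scheduler base;
          struct drm_sched_entity entity;
          struct workqueue_struct *wq;
  };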
2023-11-24  drm/nouveau: use GPUVM common infrastructure  (Danilo Krummrich)

GPUVM provides common infrastructure to track external and evicted GEM objects as well as locking and validation helpers.

Especially external and evicted object tracking is a huge improvement compared to the current brute-force approach of iterating all mappings in order to lock and validate the GPUVM's GEM objects. Hence, make use of it.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231113221202.7203-1-dakr@redhat.com
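A hedged sketch of the lock/validate flow these GPUVM helpers provide, replacing the per-mapping iteration mentioned above. The helper names and fields follow my reading of the drm_gpuvm/drm_exec API of that era and may not match the current kernel exactly; a real driver would keep the lock held across job submission instead of dropping it immediately.

  #include <drm/drm_gpuvm.h>
  #include <drm/drm_exec.h>

  static int lock_and_validate(struct drm_gpuvm *gpuvm)
  {
          struct drm_gpuvm_exec vm_exec = {
                  .vm = gpuvm,
                  .flags = DRM_EXEC_INTERRUPTIBLE_WAIT,
          };
          int ret;

          /* Locks the VM's dma-resv plus all tracked external GEM objects. */
          ret = drm_gpuvm_exec_lock(&vm_exec);
          if (ret)
                  return ret;

          /* Revalidates all GEM objects currently on the evicted list. */
          ret = drm_gpuvm_exec_validate(&vm_exec);

          drm_gpuvm_exec_unlock(&vm_exec);
          return ret;
  }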
2023-10-23  BackMerge tag 'v6.6-rc7' into drm-next  (Dave Airlie)

This is needed to add the msm PR, which is based on a newer base.

Signed-off-by: Dave Airlie <airlied@redhat.com>
2023-10-04  drm/nouveau: exec: report max pushs through getparam  (Danilo Krummrich)

Report the maximum number of IBs that can be pushed with a single DRM_IOCTL_NOUVEAU_EXEC through DRM_IOCTL_NOUVEAU_GETPARAM.

While the maximum number of IBs per ring might vary between chipsets, the kernel will make sure that userspace can only push a fraction of the maximum number of IBs per ring per job, such that we avoid a situation where there's only a single job occupying the ring, which could potentially lead to the ring running dry.

Using DRM_IOCTL_NOUVEAU_GETPARAM to report the maximum number of IBs that can be pushed with a single DRM_IOCTL_NOUVEAU_EXEC implies that all channels of a given device have the same ring size.

Reviewed-by: Dave Airlie <airlied@redhat.com>
Reviewed-by: Lyude Paul <lyude@redhat.com>
Acked-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231002135008.10651-3-dakr@redhat.com
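A userspace-side sketch of querying the limit; it assumes the parameter name this commit introduces is NOUVEAU_GETPARAM_EXEC_PUSH_MAX, which should be checked against include/uapi/drm/nouveau_drm.h.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/nouveau_drm.h>

  /* Returns 0 on success, -1 on failure (errno set by ioctl). */
  static int query_exec_push_max(int drm_fd, uint64_t *max_pushes)
  {
          struct drm_nouveau_getparam req = {
                  .param = NOUVEAU_GETPARAM_EXEC_PUSH_MAX,  /* assumed name */
          };

          if (ioctl(drm_fd, DRM_IOCTL_NOUVEAU_GETPARAM, &req))
                  return -1;

          *max_pushes = req.value;
          return 0;
  }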
2023-09-29  Merge tag 'drm-misc-next-2023-09-27' of git://anongit.freedesktop.org/drm/drm-misc into drm-next  (Dave Airlie)

drm-misc-next for v6.7-rc1:

UAPI Changes:
- drm_file owner is now updated during use, in the case of a drm fd opened by the display server for a client, the correct owner is displayed.
- Qaic gains support for the QAIC_DETACH_SLICE_BO ioctl to allow bo recycling.

Cross-subsystem Changes:
- Disable boot logo for au1200fb, mmpfb and unexport logo helpers. Only fbcon should manage display of logo.
- Update freescale in MAINTAINERS.
- Add some bridge files to bridge in MAINTAINERS.
- Update gma500 driver repo in MAINTAINERS to point to drm-misc.

Core Changes:
- Move size computations to drm buddy allocator.
- Make drm_atomic_helper_shutdown(NULL) a nop.
- Assorted small fixes in drm_debugfs, DP-MST payload addition error handling.
- Fix DRM_BRIDGE_ATTACH_NO_CONNECTOR handling.
- Handle bad (h/v)sync_end in EDID by clipping to htotal.
- Build GPUVM as a module.

Driver Changes:
- Simple drivers don't need to cache prepared result.
- Call drm_atomic_helper_shutdown() in shutdown/unbind for a whole lot more drm drivers.
- Assorted small fixes in amdgpu, ssd130x, bridge/it6621, accel/qaic, nouveau, tc358768.
- Add NV12 for komeda writeback.
- Add arbitration lost event to synopsis/dw-hdmi-cec.
- Speed up s/r in nouveau by not restoring some big bo's.
- Assorted nouveau display rework in preparation for GSP-RM, especially related to how the modeset sequence works and the DP sequence in relation to link training.
- Update anx7816 panel.
- Support NVSYNC and NHSYNC in tegra.
- Allow multiple power domains in simple driver.

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/f1fae5eb-25b8-192a-9a53-215e1184ce81@linux.intel.com
2023-09-26  drm/nouveau: uvmm: rename 'umgr' to 'base'  (Danilo Krummrich)

Rename the struct drm_gpuvm member within struct nouveau_uvmm from 'umgr' to 'base'.

Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230920144343.64830-4-dakr@redhat.com
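A minimal before/after sketch of the rename; all other members of struct nouveau_uvmm are elided.

  struct nouveau_uvmm {
          struct drm_gpuvm base;   /* was: struct drm_gpuvm umgr; */
          /* other members elided */
  };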
2023-09-26  drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm  (Danilo Krummrich)

Rename struct drm_gpuva_manager to struct drm_gpuvm including corresponding functions. This way the GPUVA manager's structures align very well with the documentation of VM_BIND [1] and VM_BIND locking [2]. It also provides a better foundation for the naming of data structures and functions introduced for implementing a common dma-resv per GPU-VM including tracking of external and evicted objects in subsequent patches.

[1] Documentation/gpu/drm-vm-bind-async.rst
[2] Documentation/gpu/drm-vm-bind-locking.rst

Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230920144343.64830-2-dakr@redhat.com
2023-09-20  drm/nouveau: sched: fix leaking memory of timedout job  (Danilo Krummrich)

Always stop and re-start the scheduler in order to let the scheduler free up the timedout job in case it got signaled.

In the case of exec jobs, the job-type-specific callback will take care of signaling all fences and tearing down the channel.

Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230916162835.5719-1-dakr@redhat.com
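A hedged sketch of the stop/restart pattern in a timedout_job callback. drm_sched_start()'s second argument has changed across kernel versions (the boolean form shown is roughly of this era), and the job-type-specific handling is reduced to a placeholder; note that with the 2025 rename further up this log, the returned status would today be spelled DRM_GPU_SCHED_STAT_RESET.

  #include <drm/gpu_scheduler.h>

  /* Placeholder for the job-type specific handling; for exec jobs this is
   * where all fences get signaled and the channel is torn down. */
  void handle_timedout_job(struct drm_sched_job *sched_job);

  static enum drm_gpu_sched_stat
  sched_timedout_job(struct drm_sched_job *sched_job)
  {
          struct drm_gpu_scheduler *sched = sched_job->sched;

          /* Always stop the scheduler, so a timedout job that already got
           * signaled is freed rather than leaked. */
          drm_sched_stop(sched, sched_job);

          handle_timedout_job(sched_job);

          drm_sched_start(sched, true);

          return DRM_GPU_SCHED_STAT_NOMINAL;
  }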
2023-08-31  drm/nouveau: fence: fix undefined fence state after emit  (Danilo Krummrich)

nouveau_fence_emit() can fail before and after initializing the dma-fence, and hence before and after initializing the dma-fence's kref.

In order to avoid nouveau_fence_emit() potentially failing before dma-fence initialization, pass the channel to nouveau_fence_new() already and perform the required check before even allocating the fence.

While at it, restore the original behavior of nouveau_fence_new() and add nouveau_fence_create() for separate (pre-)allocation instead. Always splitting up allocation and emit wasn't a good idea in the first place. Hence, limit it to the places where we actually need to pre-allocate.

Fixes: 7f2a0b50b2b2 ("drm/nouveau: fence: separate fence alloc and emit")
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230829223847.4406-1-dakr@redhat.com
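A sketch of the two call patterns after this fix; the exact parameter order of the Nouveau fence helpers is my reading of the resulting code and may differ.

  static int emit_job_fence(struct nouveau_channel *chan, bool prealloc,
                            struct nouveau_fence **pfence)
  {
          int ret;

          if (!prealloc)
                  /* Common case: allocate and emit in one call; the channel
                   * is known up front, so the precondition check can run
                   * before the fence is even allocated. */
                  return nouveau_fence_new(pfence, chan);

          /* Pre-allocation case: create the fence early, then emit once the
           * job is past its point of no return (shown back-to-back here for
           * brevity). */
          ret = nouveau_fence_create(pfence, chan);
          if (ret)
                  return ret;

          return nouveau_fence_emit(*pfence);
  }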
2023-08-24  drm/nouveau: uapi: don't pass NO_PREFETCH flag implicitly  (Danilo Krummrich)

Currently, NO_PREFETCH is passed implicitly through drm_nouveau_gem_pushbuf_push::length and drm_nouveau_exec_push::va_len.

Since this is a direct representation of how the HW is programmed, it isn't really future-proof for a uAPI. Hence, fix this up for the new uAPI and split up the va_len field of struct drm_nouveau_exec_push, such that we keep 32 bits for va_len and 32 bits for flags.

For drm_nouveau_gem_pushbuf_push::length, at least provide NOUVEAU_GEM_PUSHBUF_NO_PREFETCH to indicate the bit shift.

While at it, fix up nv50_dma_push() as well, such that the caller doesn't need to encode the NO_PREFETCH flag into the length parameter.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Faith Ekstrand <faith.ekstrand@collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230823181746.3446-1-dakr@redhat.com
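A sketch of the resulting uAPI shape as described; the flag name is my reading of the new header and should be checked against include/uapi/drm/nouveau_drm.h.

  struct drm_nouveau_exec_push {
          __u64 va;        /* push buffer VA */
          __u32 va_len;    /* previously 64 bits, with NO_PREFETCH implied */
          __u32 flags;     /* e.g. DRM_NOUVEAU_EXEC_PUSH_NO_PREFETCH (assumed name) */
  };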
2023-08-08  drm/nouveau: remove incorrect __user annotations  (Danilo Krummrich)

Fix a copy-paste error causing the EXEC and VM_BIND syscalls' data pointers to carry incorrect __user annotations.

Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230807163238.2091-4-dakr@redhat.com
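For context, a general illustration of the convention at stake (not the specific Nouveau structs touched by this fix): __user belongs on pointers that genuinely reference userspace memory, while ioctl argument structs carry plain __u64 values that the kernel converts explicitly. The struct and field names below are hypothetical.

  #include <linux/uaccess.h>

  struct example_args {
          __u64 data_ptr;   /* plain __u64 in the uAPI struct, no __user */
  };

  static int copy_in_data(const struct example_args *args, void *dst, size_t size)
  {
          /* The __user annotation appears only where userspace memory is
           * actually dereferenced. */
          void __user *uptr = u64_to_user_ptr(args->data_ptr);

          return copy_from_user(dst, uptr, size) ? -EFAULT : 0;
  }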
2023-08-04  drm/nouveau: implement new VM_BIND uAPI  (Danilo Krummrich)

This commit provides the implementation for the new uAPI motivated by the Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new DRM_IOCTL_NOUVEAU_VM_INIT ioctl, for UMDs to specify the portion of VA space managed by the kernel and userspace, respectively.

2) Allocate and free a VA space region, as well as bind and unbind memory to the GPU's VA space, via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl. UMDs can request the named operations to be processed either synchronously or asynchronously. It supports DRM syncobjs (incl. timelines) as synchronization mechanism. The management of the GPU VA mappings is implemented with the DRM GPU VA manager.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl. The execution happens asynchronously. It supports DRM syncobjs (incl. timelines) as synchronization mechanism. DRM GEM object locking is handled with drm_exec.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM GPU scheduler for the asynchronous paths.

Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230804182406.5222-12-dakr@redhat.com
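A hedged userspace sketch of the flow across the three ioctls described above. Struct and field names are my reading of the resulting uAPI and should be checked against include/uapi/drm/nouveau_drm.h; the addresses and sizes are arbitrary placeholders, syncobj wait/signal arrays are not populated, and error handling is omitted.

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <drm/nouveau_drm.h>

  static void vm_bind_exec_flow(int drm_fd, uint32_t channel, uint32_t gem_handle)
  {
          /* 1) Split the VA space between kernel- and user-managed parts. */
          struct drm_nouveau_vm_init init = {
                  .kernel_managed_addr = 0,
                  .kernel_managed_size = 1ull << 30,
          };
          ioctl(drm_fd, DRM_IOCTL_NOUVEAU_VM_INIT, &init);

          /* 2) Map a GEM object into the user-managed part of the VA space. */
          struct drm_nouveau_vm_bind_op op = {
                  .op = DRM_NOUVEAU_VM_BIND_OP_MAP,
                  .handle = gem_handle,
                  .addr = 1ull << 30,
                  .range = 1ull << 20,
          };
          struct drm_nouveau_vm_bind bind = {
                  .op_count = 1,
                  .op_ptr = (uintptr_t)&op,
          };
          ioctl(drm_fd, DRM_IOCTL_NOUVEAU_VM_BIND, &bind);

          /* 3) Submit a push buffer; execution is asynchronous and can be
           * ordered against other work with DRM syncobjs (not shown). */
          struct drm_nouveau_exec_push push = {
                  .va = 1ull << 30,
                  .va_len = 0x1000,
          };
          struct drm_nouveau_exec exec = {
                  .channel = channel,
                  .push_count = 1,
                  .push_ptr = (uintptr_t)&push,
          };
          ioctl(drm_fd, DRM_IOCTL_NOUVEAU_EXEC, &exec);
  }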