path: root/drivers/gpu/drm/nouveau/nouveau_sched.h
Age         Commit message                                          Author
2025-07-10  drm/nouveau: Remove waitque for sched teardown  (Philipp Stanner)
struct nouveau_sched contains a waitque needed to prevent drm_sched_fini() from being called while there are still jobs pending; doing so would previously have caused memory leaks. With the new memleak-free mode of operation, switched on in drm_sched_fini() by providing the callback nouveau_sched_cancel_job(), the waitque is not necessary anymore.

Remove the waitque.

Acked-by: Danilo Krummrich <dakr@kernel.org>
Signed-off-by: Philipp Stanner <phasta@kernel.org>
Link: https://lore.kernel.org/r/20250710125412.128476-10-phasta@kernel.org
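As a rough illustration of the change described above, the sketch below shows a scheduler backend-ops table gaining a cancel_job hook. Only nouveau_sched_cancel_job() is named in the commit message; the other callback names and the exact drm_sched_backend_ops signatures are assumptions based on drm/gpu_scheduler.h and may not match the driver verbatim.

    #include <drm/gpu_scheduler.h>

    /* Callback prototypes as assumed from the commit message; their bodies
     * live in nouveau_sched.c and are omitted here. */
    struct dma_fence *nouveau_sched_run_job(struct drm_sched_job *job);
    enum drm_gpu_sched_stat nouveau_sched_timedout_job(struct drm_sched_job *job);
    void nouveau_sched_free_job(struct drm_sched_job *job);
    void nouveau_sched_cancel_job(struct drm_sched_job *job);

    static const struct drm_sched_backend_ops example_sched_ops = {
            .run_job      = nouveau_sched_run_job,
            .timedout_job = nouveau_sched_timedout_job,
            .free_job     = nouveau_sched_free_job,
            /* New: drm_sched_fini() can now cancel still-pending jobs itself,
             * so the driver no longer has to block on a waitque at teardown. */
            .cancel_job   = nouveau_sched_cancel_job,
    };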
2024-06-17  drm/nouveau: Constify struct nouveau_job_ops  (Christophe JAILLET)
"struct nouveau_job_ops" is not modified in these drivers. Constifying this structure moves some data to a read-only section, so increase overall security. In order to do it, "struct nouveau_job" and "struct nouveau_job_args" also need to be adjusted to this new const qualifier. On a x86_64, with allmodconfig: Before: ====== text data bss dec hex filename 5570 152 0 5722 165a drivers/gpu/drm/nouveau/nouveau_exec.o After: ===== text data bss dec hex filename 5630 112 0 5742 166e drivers/gpu/drm/nouveau/nouveau_exec.o Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Danilo Krummrich <dakr@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/860e9753d7867aa46b003bb3d0497f1b04065b24.1718381285.git.christophe.jaillet@wanadoo.fr
2024-02-12  drm/nouveau: don't fini scheduler if not initialized  (Danilo Krummrich)
nouveau_abi16_ioctl_channel_alloc() and nouveau_cli_init() simply call their corresponding *_fini() counterparts. This can lead to nouveau_sched_fini() being called without struct nouveau_sched ever having been initialized in the first place.

Instead of embedding struct nouveau_sched into struct nouveau_cli and struct nouveau_chan_abi16, allocate struct nouveau_sched separately, such that we can check the corresponding pointer for NULL in the particular *_fini() functions.

It makes sense to allocate struct nouveau_sched separately anyway, since in a subsequent commit we can also avoid allocating a struct nouveau_sched in nouveau_abi16_ioctl_channel_alloc() at all if the VM_BIND uAPI has been disabled due to the legacy uAPI being used.

Fixes: 5f03a507b29e ("drm/nouveau: implement 1:1 scheduler - entity relationship")
Reported-by: Timur Tabi <ttabi@nvidia.com>
Tested-by: Timur Tabi <ttabi@nvidia.com>
Closes: https://lore.kernel.org/nouveau/20240131213917.1545604-1-ttabi@nvidia.com/
Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240202000606.3526-1-dakr@redhat.com
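A minimal sketch of the resulting teardown pattern, assuming a hypothetical client struct; only nouveau_sched_fini() and the idea of a separately allocated pointer come from the commit text, the rest is illustrative.

    #include <linux/slab.h>

    struct nouveau_sched;                        /* defined in nouveau_sched.h */
    void nouveau_sched_fini(struct nouveau_sched *sched);  /* prototype assumed */

    /* Hypothetical, trimmed-down client struct: the scheduler is now a
     * separately allocated pointer instead of an embedded struct. */
    struct example_cli {
            struct nouveau_sched *sched;         /* NULL until actually set up */
    };

    static void example_cli_fini(struct example_cli *cli)
    {
            if (cli->sched) {                    /* only fini what was initialized */
                    nouveau_sched_fini(cli->sched);
                    kfree(cli->sched);
                    cli->sched = NULL;
            }
    }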
2023-11-24  drm/nouveau: enable dynamic job-flow control  (Danilo Krummrich)
Make use of the scheduler's credit limit and scheduler job's credit count to account for the actual size of a job, such that we fill up the ring efficiently.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231114002728.3491-2-dakr@redhat.com
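A rough sketch of the idea: size each job in scheduler credits so the ring fill level is accounted for. drm_sched_job_init() taking a credit count comes from the scheduler-side series this builds on (signature as of that series); the credit computation and the other names here are made up.

    #include <drm/gpu_scheduler.h>

    /* Hypothetical: assume each push buffer entry occupies one ring slot,
     * i.e. one scheduler credit. The real computation lives in the driver. */
    static u32 example_job_credits(u32 nr_pushes)
    {
            return nr_pushes;
    }

    static int example_job_init(struct drm_sched_job *sjob,
                                struct drm_sched_entity *entity,
                                u32 nr_pushes, void *owner)
    {
            /* Credit-aware init: the scheduler will not push more credits to
             * the hardware than the configured credit limit allows. */
            return drm_sched_job_init(sjob, entity,
                                      example_job_credits(nr_pushes), owner);
    }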
2023-11-24  drm/nouveau: implement 1:1 scheduler - entity relationship  (Danilo Krummrich)
Recent patches to the DRM scheduler [1][2] allow for a variable number of run-queues and add support for (shared) workqueues rather than dedicated kthreads per scheduler.

This allows us to create a 1:1 relationship between a GPU scheduler and a scheduler entity, in order to properly support firmware schedulers that handle an arbitrary number of dynamically allocated command ring buffers. This perfectly matches Nouveau's needs, hence make use of it.

Topology-wise we create one scheduler instance per client (handling VM_BIND jobs) and one scheduler instance per channel (handling EXEC jobs). All channel scheduler instances share a workqueue, but every client scheduler instance has a dedicated workqueue. The latter is required to ensure that a VM_BIND job's free_job() work and run_job() work can always run concurrently and hence free_job() work can never stall run_job() work.

For EXEC jobs we don't have this requirement, since an EXEC job's free_job() does not need to take any locks which are directly or indirectly held for allocations elsewhere.

[1] https://lore.kernel.org/all/8f53f7ef-7621-4f0b-bdef-d8d20bc497ff@redhat.com/T/
[2] https://lore.kernel.org/all/20231031032439.1558703-1-matthew.brost@intel.com/T/

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231114002728.3491-1-dakr@redhat.com
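A conceptual sketch of the topology described above; the struct and field names are illustrative, not copied from nouveau_sched.h.

    #include <drm/gpu_scheduler.h>
    #include <linux/workqueue.h>

    /* One scheduler instance paired with exactly one entity, plus the
     * workqueue it submits from: dedicated per client for VM_BIND jobs,
     * shared across channels for EXEC jobs. */
    struct example_sched {
            struct drm_gpu_scheduler base;       /* the one scheduler ...          */
            struct drm_sched_entity entity;      /* ... and its single entity      */
            struct workqueue_struct *wq;         /* dedicated (VM_BIND) / shared (EXEC) */
    };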
2023-11-24  drm/nouveau: use GPUVM common infrastructure  (Danilo Krummrich)
GPUVM provides common infrastructure to track external and evicted GEM objects as well as locking and validation helpers.

Especially the external and evicted object tracking is a huge improvement compared to the previous brute-force approach of iterating all mappings in order to lock and validate the GPUVM's GEM objects. Hence, make use of it.

Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Reviewed-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20231113221202.7203-1-dakr@redhat.com
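A hedged sketch of the lock-and-validate pattern GPUVM provides, replacing the old iterate-all-mappings approach; the drm_gpuvm_exec helpers are recalled from drm_gpuvm.h, and the example_uvmm layout is an assumption.

    #include <drm/drm_gpuvm.h>

    struct example_uvmm { struct drm_gpuvm base; };    /* assumed embedding */

    static int example_lock_and_validate(struct example_uvmm *uvmm)
    {
            struct drm_gpuvm_exec vm_exec = {
                    .vm = &uvmm->base,
                    .flags = DRM_EXEC_IGNORE_DUPLICATES,
                    .num_fences = 1,
            };
            int ret;

            /* Locks the VM's resv and all external GEM objects in one go. */
            ret = drm_gpuvm_exec_lock(&vm_exec);
            if (ret)
                    return ret;

            /* Revalidates evicted GEM objects tracked by the GPUVM. */
            ret = drm_gpuvm_exec_validate(&vm_exec);

            drm_gpuvm_exec_unlock(&vm_exec);
            return ret;
    }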
2023-08-04  drm/nouveau: implement new VM_BIND uAPI  (Danilo Krummrich)
This commit provides the implementation for the new uAPI motivated by the Vulkan API. It allows user mode drivers (UMDs) to:

1) Initialize a GPU virtual address (VA) space via the new DRM_IOCTL_NOUVEAU_VM_INIT ioctl, for UMDs to specify the portion of VA space managed by the kernel and userspace, respectively.

2) Allocate and free a VA space region as well as bind and unbind memory to the GPU's VA space via the new DRM_IOCTL_NOUVEAU_VM_BIND ioctl. UMDs can request the named operations to be processed either synchronously or asynchronously. It supports DRM syncobjs (incl. timelines) as synchronization mechanism. The management of the GPU VA mappings is implemented with the DRM GPU VA manager.

3) Execute push buffers with the new DRM_IOCTL_NOUVEAU_EXEC ioctl. The execution happens asynchronously. It supports DRM syncobjs (incl. timelines) as synchronization mechanism. DRM GEM object locking is handled with drm_exec.

Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM GPU scheduler for the asynchronous paths.

Reviewed-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Danilo Krummrich <dakr@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20230804182406.5222-12-dakr@redhat.com
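From the UMD side, setting up a VA space roughly looks like the sketch below; DRM_IOCTL_NOUVEAU_VM_INIT is named in the commit, while the struct field names and the chosen split are assumptions based on the description of the kernel/userspace VA partitioning. VM_BIND and EXEC submissions would then follow with their respective ioctls.

    #include <sys/ioctl.h>
    #include <drm/nouveau_drm.h>

    /* Sketch: tell the kernel which portion of the VA space it manages;
     * the remainder is then available for userspace-managed VM_BIND
     * mappings. Field names are assumptions. */
    static int example_vm_init(int drm_fd)
    {
            struct drm_nouveau_vm_init init = {
                    .kernel_managed_addr = 0,
                    .kernel_managed_size = 1ull << 32,   /* kernel keeps the low 4 GiB */
            };

            return ioctl(drm_fd, DRM_IOCTL_NOUVEAU_VM_INIT, &init);
    }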