path: root/drivers/gpu/drm/i915/gem
Age  Commit message  Author
2022-12-13  drm/i915: stop using ttm_bo_wait  (Christian König)
TTM is just wrapping core DMA functionality here, remove the mid-layer. No functional change. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221125102137.1801-7-christian.koenig@amd.com
2022-12-12  Merge tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random  (Linus Torvalds)
Pull random number generator updates from Jason Donenfeld:

- Replace prandom_u32_max() and various open-coded variants of it; there is now a new family of functions that uses fast rejection sampling to choose properly uniformly random numbers within an interval:

    get_random_u32_below(ceil) - [0, ceil)
    get_random_u32_above(floor) - (floor, U32_MAX]
    get_random_u32_inclusive(floor, ceil) - [floor, ceil]

  Coccinelle was used to convert all current users of prandom_u32_max(), as well as many open-coded patterns, resulting in improvements throughout the tree. I'll have a "late" 6.1-rc1 pull for you that removes the now unused prandom_u32_max() function, just in case any other trees add a new use case of it that needs to be converted. According to linux-next, there may be two trivial cases of prandom_u32_max() reintroductions that are fixable with a 's/.../.../'. So I'll have for you a final conversion patch doing that alongside the removal patch during the second week. This is a treewide change that touches many files throughout.
- More consistent use of get_random_canary().
- Updates to comments, documentation, tests, headers, and simplification in configuration.
- The arch_get_random*_early() abstraction was only used by arm64 and wasn't entirely useful, so this has been replaced by code that works in all relevant contexts.
- The kernel will use and manage random seeds in non-volatile EFI variables, refreshing a variable with a fresh seed when the RNG is initialized. The RNG GUID namespace is then hidden from efivarfs to prevent accidental leakage. These changes are split into random.c infrastructure code used in the EFI subsystem, in this pull request, and related support inside of EFISTUB, in Ard's EFI tree. These are co-dependent for full functionality, but the order of merging doesn't matter.
- Part of the infrastructure added for the EFI support is also used for an improvement to the way vsprintf initializes its siphash key, replacing a sleep loop wart.
- The hardware RNG framework now always calls its correct random.c input function, add_hwgenerator_randomness(), rather than sometimes going through helpers better suited for other cases.
- The add_latent_entropy() function has long been called from the fork handler, but is a no-op when the latent entropy gcc plugin isn't used, which is fine for the purposes of latent entropy. But it was missing out on the cycle counter that was also being mixed in beside the latent entropy variable. So now, if the latent entropy gcc plugin isn't enabled, add_latent_entropy() will expand to a call to add_device_randomness(NULL, 0), which adds a cycle counter, without the absent latent entropy variable.
- The RNG is now reseeded from a delayed worker, rather than on demand when used. Always running from a worker allows it to make use of the CPU RNG on platforms like S390x, whose instructions are too slow to do so from interrupts. It also has the effect of adding in new inputs more frequently with more regularity, amounting to a long term transcript of random values. Plus, it helps a bit with the upcoming vDSO implementation (which isn't yet ready for 6.2).
- The jitter entropy algorithm now tries to execute on many different CPUs, round-robining, in hopes of hitting even more memory latencies and other unpredictable effects. It also will mix in a cycle counter when the entropy timer fires, in addition to being mixed in from the main loop, to account more explicitly for fluctuations in that timer firing. And the state it touches is now kept within the same cache line, so that it's assured that the different execution contexts will cause latencies.

* tag 'random-6.2-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random: (23 commits)
  random: include <linux/once.h> in the right header
  random: align entropy_timer_state to cache line
  random: mix in cycle counter when jitter timer fires
  random: spread out jitter callback to different CPUs
  random: remove extraneous period and add a missing one in comments
  efi: random: refresh non-volatile random seed when RNG is initialized
  vsprintf: initialize siphash key using notifier
  random: add back async readiness notifier
  random: reseed in delayed work rather than on-demand
  random: always mix cycle counter in add_latent_entropy()
  hw_random: use add_hwgenerator_randomness() for early entropy
  random: modernize documentation comment on get_random_bytes()
  random: adjust comment to account for removed function
  random: remove early archrandom abstraction
  random: use random.trust_{bootloader,cpu} command line option only
  stackprotector: actually use get_random_canary()
  stackprotector: move get_random_canary() into stackprotector.h
  treewide: use get_random_u32_inclusive() when possible
  treewide: use get_random_u32_{above,below}() instead of manual loop
  treewide: use get_random_u32_below() instead of deprecated function
  ...
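As a quick illustration of the interval semantics listed above, here is a minimal, hedged usage sketch; the three helpers are real <linux/random.h> functions, while the wrapper function and its parameters (nr_items, threshold) are placeholders, not code from this tree:

    #include <linux/random.h>

    /* Placeholder function: shows the three new helpers side by side. */
    static u32 example_random_usage(u32 nr_items, u32 threshold)
    {
            u32 idx  = get_random_u32_below(nr_items);     /* [0, nr_items)        */
            u32 mid  = get_random_u32_inclusive(10, 100);  /* [10, 100]            */
            u32 high = get_random_u32_above(threshold);    /* (threshold, U32_MAX] */

            return idx ^ mid ^ high;
    }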
2022-12-09  drm/i915/pxp: Promote pxp subsystem to top-level of i915  (Alan Previn)
Starting with MTL, there will be two GT-tiles, a render and a media tile. PXP, as a service for supporting workloads with protected contexts and protected buffers, can be subscribed to by process workloads on any tile. However, depending on the platform, only one of the tiles is used for control events pertaining to PXP operation (such as creating the arbitration session and session tear-down).

PXP as a global feature is accessible via batch buffer instructions on any engine/tile and the coherency across tiles is handled implicitly by the HW. In fact, for the foreseeable future, we are expecting this single-control-tile for the PXP subsystem. In MTL, it's the standalone media tile (not the root tile) because it contains the VDBOX and KCR engine (among the assets PXP relies on for those events).

Looking at the current code design, each tile is represented by the intel_gt structure while the intel_pxp structure currently hangs off the intel_gt structure. Keeping the intel_pxp structure within the intel_gt structure makes some internal functionality more straightforward, but hurts code readability and maintainability for the many external-to-pxp subsystems which may need to pick the correct intel_gt structure. An example of this would be the intel_pxp_is_active or intel_pxp_is_enabled functionality, which should be viewed as a global-level inquiry, not a per-gt inquiry.

That said, this series promotes the intel_pxp structure into the drm_i915_private structure, making it a top-level subsystem; the PXP subsystem will select the control gt internally and keep a pointer to it for internal reference. This promotion comes with two noteworthy changes:

1. Exported pxp functions that are called by external subsystems (such as intel_pxp_enabled/active) will have to check implicitly if i915->pxp is valid, as that structure will not be allocated for HW that doesn't support PXP.

2. Since GT is now considered a soft-dependency of PXP, we are ensuring that GT init happens before PXP init and vice versa for fini. This causes a minor ordering change whereby we previously called intel_pxp_suspend after intel_uc_suspend but now it is called before i915_gem_suspend_late; the change is required for correct dependency flows. Additionally, this re-ordering doesn't have any impact because at that point, in either case, the top level entry to i915 won't observe any PXP events (since the GPU was quiesced during suspend_prepare). Also, any PXP event doesn't really matter when we disable the PXP HW (global GT irqs are already off anyway, so even if there was a bug that generated spurious events we wouldn't see it and we would just clean it up on resume, which is okay since the default fallback action for PXP would be to keep the sessions off at this suspend stage).

Changes from prior revs:
v11: - Reformat a comment (Tvrtko).
v10: - Change the code flow of intel_pxp_init to make it cleaner and more readable, with better comments explaining the difference between the full-PXP-feature and the partial-teelink inits depending on the platform. Additionally, only do the pxp allocation when we are certain the subsystem is needed. (Tvrtko).
v9: - Cosmetic cleanups in supported/enabled/active. (Daniele).
    - Add comments for intel_pxp_init and pxp_get_ctrl_gt that explain the functional flow for when PXP is not supported but the backend-assets are needed for HuC authentication (Daniele and Tvrtko).
    - Fix two remaining functions that are accessible outside PXP and need to check pxp ptrs before using them: intel_pxp_irq_handler and intel_pxp_huc_load_and_auth (Tvrtko and Daniele).
    - Use helper macro in pxp-debugfs (Tvrtko).
v8: - Remove pxp_to_gt macro (Daniele).
    - Fix a bug in pxp_get_ctrl_gt for the case of MTL when we don't support GSC-FW on it. (Daniele).
    - Leave i915->pxp as NULL if we don't support PXP and, in line with that, do an additional validity check on i915->pxp in intel_pxp_is_supported/enabled/active (Daniele).
    - Remove unnecessary include header from intel_gt_debugfs.c and check drm_minor i915->drm.primary (Daniele).
    - Other cosmetics / minor issues / more comments on the suspend flow order change (Daniele).
v7: - Drop i915_dev_to_pxp and in intel_pxp_init use 'i915->pxp' throughout instead of the local variable newpxp. (Rodrigo)
    - In the case intel_pxp_fini is called during driver unload but after i915 loading failed without pxp being allocated, check i915->pxp before referencing it. (Alan)
v6: - Remove the HAS_PXP macro and replace it with intel_pxp_is_supported because: [1] the introduction of 'ctrl_gt' means we correct this for MTL's upcoming series now; [2] this has little impact globally as it is only used by PXP-internal callers at the moment.
    - Change intel_pxp_init/fini to take i915 as their input to avoid ptr-to-ptr in init/fini calls. (Jani).
    - Remove the backpointer pxp->i915 since we can use pxp->ctrl_gt->i915 if we need it. (Rodrigo).
v5: - Switch from a series to a single patch (Rodrigo).
    - Change function name from pxp_get_kcr_owner_gt to pxp_get_ctrl_gt.
    - Fix CI BAT failure by removing the redundant call to intel_pxp_fini from driver-remove.
    - NOTE: remaining open still persists on using ptr-to-ptr and back-ptr.
v4: - Instead of maintaining intel_pxp as an intel_gt structure member and creating a number of convoluted helpers that take i915 as input and redirect to the correct intel_gt, or take any intel_gt and internally replace it with the correct intel_gt, promote it to be a top-level i915 structure.
v3: - Rename gt level helper functions to "intel_pxp_is_enabled/supported/active_on_gt" (Daniele)
    - Upgrade _gt_supports_pxp to replace what was intel_gtpxp_is_supported as the new intel_pxp_is_supported_on_gt to check for PXP feature support vs the tee support for HuC authentication. Fix pxp-debugfs registration to use only the former to decide support. (Daniele)
    - A couple of minor optimizations.
v2: - Avoid introducing new device info or gt variables and use existing checks / macros to differentiate the correct GT->PXP control ownership (Daniele Ceraolo Spurio)
    - Don't reuse the updated global checkers for per-GT callers (such as other files within PXP) to avoid unnecessary GT re-parsing; expose a replacement helper like the prior ones. (Daniele).
v1: - Add one more patch to the series for the intel_pxp suspend/resume for similar refactoring

References: https://patchwork.freedesktop.org/patch/msgid/20221202011407.4068371-1-alan.previn.teres.alexis@intel.com Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221208180542.998148-1-alan.previn.teres.alexis@intel.com
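A toy model (not the actual i915 code) of the pointer-validity convention described in point 1 above: i915->pxp is only allocated on hardware that supports PXP, so exported helpers check the pointer rather than relying on a HAS_PXP()-style macro. All names below are illustrative.

    #include <stddef.h>

    struct intel_pxp_model;                   /* allocated only when PXP is supported */

    struct i915_model {
            struct intel_pxp_model *pxp;      /* NULL on HW without PXP */
    };

    static inline int pxp_is_enabled_model(const struct i915_model *i915)
    {
            return i915->pxp != NULL;         /* callers no longer pick a gt first */
    }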
2022-12-06  drm/ttm: merge ttm_bo_api.h and ttm_bo_driver.h v2  (Christian König)
Merge and cleanup the two headers into a single description of the object API. Also move all the documentation to the implementation and drop unnecessary includes from the header. No functional change. v2: minimal checkpatch.pl cleanup Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221125102137.1801-4-christian.koenig@amd.com
2022-12-06  drm/i915: Refine VT-d scanout workaround  (Chris Wilson)
VT-d may cause overfetch of the scanout PTE, both before and after the vma (depending on the scanout orientation). bspec recommends that we provide a tile-row in either direction, and suggests using 168 PTE, warning that the accesses will wrap around the ends of the GGTT. Currently, we fill the entire GGTT with scratch pages when using VT-d to always ensure there are valid entries around every vma, including scanout. However, writing every PTE is slow as on recent devices we perform 8MiB of uncached writes, incurring an extra 100ms during resume. If instead we focus on only putting guard pages around scanout, we can avoid touching the whole GGTT. To avoid having to introduce extra nodes around each scanout vma, we adjust the scanout drm_mm_node to be smaller than the allocated space, and fix up the extra PTE during dma binding. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221130235805.221010-5-andi.shyti@linux.intel.com
2022-12-06  drm/i915: Wrap all access to i915_vma.node.start|size  (Chris Wilson)
We already wrap i915_vma.node.start for use with the GGTT, as there we can perform additional sanity checks that the node belongs to the GGTT and fits within the 32b registers. In the next couple of patches, we will introduce guard pages around the objects _inside_ the drm_mm_node allocation. That is we will offset the vma->pages so that the first page is at drm_mm_node.start + vma->guard (not 0 as is currently the case). All users must then not use i915_vma.node.start directly, but compute the guard offset, thus all users are converted to use a i915_vma_offset() wrapper. The notable exceptions are the selftests that are testing exact behaviour of i915_vma_pin/i915_vma_insert. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Tejas Upadhyay <tejaskumarx.surendrakumar.upadhyay@intel.com> Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221130235805.221010-3-andi.shyti@linux.intel.com
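A hedged sketch of the wrapper's intent, taken directly from the description above (the object's first page sits at drm_mm_node.start + vma->guard rather than at node.start); the in-tree i915_vma_offset() helper may differ in detail, and the structures below are illustrative stand-ins:

    #include <linux/types.h>

    struct vma_model {
            struct { u64 start, size; } node;    /* drm_mm_node allocation */
            u64 guard;                           /* leading guard size in bytes */
    };

    static inline u64 vma_offset_model(const struct vma_model *vma)
    {
            return vma->node.start + vma->guard; /* where the object's pages begin */
    }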
2022-12-06  drm/i915: Limit the display memory alignment to 32 bit instead of 64  (Andi Shyti)
The coming commit "drm/i915: Introduce guard pages to i915_vma" from Chris was originally changing display_alignment to u32 from u64. The reason is that the display GGTT is and will be limited to 4GB. Put it in a separate patch and use "max(...)" instead of "max_t(u64, ...)" when assigning the value. We can safely use max as we know beforehand that the comparison is between two u32 variables. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221130235805.221010-2-andi.shyti@linux.intel.com
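The change in miniature, as a hedged sketch (the function and variable names are illustrative): with both operands known to be u32, plain max() suffices and the u64 cast is dropped.

    #include <linux/minmax.h>
    #include <linux/types.h>

    static u32 pick_display_alignment(u32 requested, u32 minimum)
    {
            /* was: max_t(u64, requested, minimum) while display_alignment was u64 */
            return max(requested, minimum);
    }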
2022-11-23  Merge tag 'drm-intel-next-2022-11-18' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  (Dave Airlie)
GVT Changes:
- gvt-next stuff mostly with refactor for the new MDEV interface.

i915 Changes:
- PSR fixes and improvements (Jouni)
- DP DSC fixes (Vinod, Jouni)
- More general display cleanups (Jani)
- More display color management cleanup targeting degamma (Ville)
- remove circ_buf.h includes (Jiri)
- wait power off delay at driver remove to optimize probe (Jani)
- More audio cleanup targeting the ELD precompute readout (Ville)
- Enable DC power states on all eDP ports (Imre)
- RPL-P stepping info (Matt Atwood)
- MTL enabling patches (RK)
- Removal of DG2 force_probe (Matt)

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/Y3f71obyEkImXoUF@intel.com
2022-11-22  Merge tag 'drm-intel-gt-next-2022-11-18' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  (Dave Airlie)
Core Changes:
- Backmerge of drm-next

Driver Changes:
- Restore probe_range behaviour for userptr (Matt A)
- Fix use-after-free on lmem_userfault_list (Matt A)
- Never purge busy TTM objects (Matt A)
- Meteorlake enabling (Daniele, Badal, Daniele, Stuart, Aravind, Alan)
- Demote GuC kernel contexts to normal priority (John)
- Use RC6 residency types as arguments to residency functions (Ashutosh, Rodrigo, Jani)
- Convert some legacy DRM debugging macros to new ones (Tvrtko)
- Don't deadlock GuC busyness stats vs reset (John)
- Remove excessive line feeds in GuC state dumps (John)
- Use i915_sg_dma_sizes() for all backends (Matt A)
- Prefer REG_FIELD_GET in intel_rps_get_cagf (Ashutosh, Rodrigo)
- Use GEN12_RPSTAT register for GT freq (Don, Badal, Ashutosh)
- Remove unwanted TTM ghost obj check (Matt A)
- Update workaround documentation (Lucas)
- Coding style and static checker fixes and cleanups (Jani, Umesh, Tvrtko, Lucas, Andrzej)
- Selftest improvements (Chris, Daniele, Riana, Andrzej)

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/Y3dMd9HDpfDehhWm@jlahtine-mobl.ger.corp.intel.com
2022-11-21  drm/i915/ttm: never purge busy objects  (Matthew Auld)
In i915_gem_madvise_ioctl() we immediately purge the object if it is not currently used, like when the mm.pages are NULL. With shmem the pages might still be hanging around or are perhaps swapped out. Similarly with ttm we might still have the pages hanging around on the ttm resource, like with lmem or shmem, but here we need to be extra careful since async unbinds are possible as well as in-progress kernel moves.

In i915_ttm_purge() we expect the pipeline-gutting to nuke the ttm resource for us, however if it's busy the memory is only moved to a ghost object, which then leads to broken behaviour when for example clearing the i915_tt->filp, since the actual ttm_tt is still alive and populated, even though it's been moved to the ghost object. When we later destroy the ghost object we hit the following, since the filp is now NULL:

[ +0.006982] #PF: supervisor read access in kernel mode
[ +0.005149] #PF: error_code(0x0000) - not-present page
[ +0.005147] PGD 11631d067 P4D 11631d067 PUD 115972067 PMD 0
[ +0.005676] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ +0.012962] Workqueue: events ttm_device_delayed_workqueue [ttm]
[ +0.006022] RIP: 0010:i915_ttm_tt_unpopulate+0x3a/0x70 [i915]
[ +0.005879] Code: 89 fb 48 85 f6 74 11 8b 55 4c 48 8b 7d 30 45 31 c0 31 c9 e8 18 6a e5 e0 80 7d 60 00 74 20 48 8b 45 68 8b 55 08 4c 89 e7 5b 5d <48> 8b 40 20 83 e2 01 41 5c 89 d1 48 8b 70 30 e9 42 b2 ff ff 4c 89
[ +0.018782] RSP: 0000:ffffc9000bf6fd70 EFLAGS: 00010202
[ +0.005244] RAX: 0000000000000000 RBX: ffff8883e12ae380 RCX: 0000000000000000
[ +0.007150] RDX: 000000008000000e RSI: ffffffff823559b4 RDI: ffff8883e12ae3c0
[ +0.007142] RBP: ffff888103b65d48 R08: 0000000000000001 R09: 0000000000000001
[ +0.007144] R10: 0000000000000001 R11: ffff88829c2c8040 R12: ffff8883e12ae3c0
[ +0.007148] R13: 0000000000000001 R14: ffff888115184140 R15: ffff888115184248
[ +0.007154] FS: 0000000000000000(0000) GS:ffff88844db00000(0000) knlGS:0000000000000000
[ +0.008108] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.005763] CR2: 0000000000000020 CR3: 000000013fdb4004 CR4: 00000000003706e0
[ +0.007152] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ +0.007145] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ +0.007154] Call Trace:
[ +0.002459]  <TASK>
[ +0.002126]  ttm_tt_unpopulate.part.0+0x17/0x70 [ttm]
[ +0.005068]  ttm_bo_tt_destroy+0x1c/0x50 [ttm]
[ +0.004464]  ttm_bo_cleanup_memtype_use+0x25/0x40 [ttm]
[ +0.005244]  ttm_bo_cleanup_refs+0x90/0x2c0 [ttm]
[ +0.004721]  ttm_bo_delayed_delete+0x235/0x250 [ttm]
[ +0.004981]  ttm_device_delayed_workqueue+0x13/0x40 [ttm]
[ +0.005422]  process_one_work+0x248/0x560
[ +0.004028]  worker_thread+0x4b/0x390
[ +0.003682]  ? process_one_work+0x560/0x560
[ +0.004199]  kthread+0xeb/0x120
[ +0.003163]  ? kthread_complete_and_exit+0x20/0x20
[ +0.004815]  ret_from_fork+0x1f/0x30

v2:
- Just use ttm_bo_wait() directly (Niranjana)
- Add testcase reference

Testcase: igt@gem_madvise@dontneed-evict-race Fixes: 213d50927763 ("drm/i915/ttm: Introduce a TTM i915 gem object backend") Reported-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Cc: <stable@vger.kernel.org> # v5.15+ Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Acked-by: Nirmoy Das <Nirmoy.Das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221115104620.120432-1-matthew.auld@intel.com (cherry picked from commit 5524b5e52e08f675116a93296fe5bee60bc43c03) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
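A hedged sketch of the shape of the fix (per the v2 note above, ttm_bo_wait() is used directly); the exact call site and error handling in the driver differ, and the wrapper function below is illustrative only:

    #include <drm/ttm/ttm_bo_api.h>   /* header name prior to the 6.2 TTM header merge */

    static void example_try_purge(struct ttm_buffer_object *bo)
    {
            /* Skip the purge when the object is still busy, so pipeline-gutting
             * never pushes live pages onto a ghost object. */
            if (ttm_bo_wait(bo, false /* interruptible */, true /* no_wait */))
                    return;   /* -EBUSY: leave the object alone */

            /* ... proceed with the actual purge ... */
    }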
2022-11-18  treewide: use get_random_u32_below() instead of deprecated function  (Jason A. Donenfeld)
This is a simple mechanical transformation done by:

@@
expression E;
@@
-	prandom_u32_max
+	get_random_u32_below
  (E)

Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Darrick J. Wong <djwong@kernel.org> # for xfs Reviewed-by: SeongJae Park <sj@kernel.org> # for damon Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> # for infiniband Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> # for arm Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # for mmc Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-11-16  drm/i915/ttm: never purge busy objects  (Matthew Auld)
In i915_gem_madvise_ioctl() we immediately purge the object if it is not currently used, like when the mm.pages are NULL. With shmem the pages might still be hanging around or are perhaps swapped out. Similarly with ttm we might still have the pages hanging around on the ttm resource, like with lmem or shmem, but here we need to be extra careful since async unbinds are possible as well as in-progress kernel moves.

In i915_ttm_purge() we expect the pipeline-gutting to nuke the ttm resource for us, however if it's busy the memory is only moved to a ghost object, which then leads to broken behaviour when for example clearing the i915_tt->filp, since the actual ttm_tt is still alive and populated, even though it's been moved to the ghost object. When we later destroy the ghost object we hit the following, since the filp is now NULL:

[ +0.006982] #PF: supervisor read access in kernel mode
[ +0.005149] #PF: error_code(0x0000) - not-present page
[ +0.005147] PGD 11631d067 P4D 11631d067 PUD 115972067 PMD 0
[ +0.005676] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ +0.012962] Workqueue: events ttm_device_delayed_workqueue [ttm]
[ +0.006022] RIP: 0010:i915_ttm_tt_unpopulate+0x3a/0x70 [i915]
[ +0.005879] Code: 89 fb 48 85 f6 74 11 8b 55 4c 48 8b 7d 30 45 31 c0 31 c9 e8 18 6a e5 e0 80 7d 60 00 74 20 48 8b 45 68 8b 55 08 4c 89 e7 5b 5d <48> 8b 40 20 83 e2 01 41 5c 89 d1 48 8b 70 30 e9 42 b2 ff ff 4c 89
[ +0.018782] RSP: 0000:ffffc9000bf6fd70 EFLAGS: 00010202
[ +0.005244] RAX: 0000000000000000 RBX: ffff8883e12ae380 RCX: 0000000000000000
[ +0.007150] RDX: 000000008000000e RSI: ffffffff823559b4 RDI: ffff8883e12ae3c0
[ +0.007142] RBP: ffff888103b65d48 R08: 0000000000000001 R09: 0000000000000001
[ +0.007144] R10: 0000000000000001 R11: ffff88829c2c8040 R12: ffff8883e12ae3c0
[ +0.007148] R13: 0000000000000001 R14: ffff888115184140 R15: ffff888115184248
[ +0.007154] FS: 0000000000000000(0000) GS:ffff88844db00000(0000) knlGS:0000000000000000
[ +0.008108] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ +0.005763] CR2: 0000000000000020 CR3: 000000013fdb4004 CR4: 00000000003706e0
[ +0.007152] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ +0.007145] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ +0.007154] Call Trace:
[ +0.002459]  <TASK>
[ +0.002126]  ttm_tt_unpopulate.part.0+0x17/0x70 [ttm]
[ +0.005068]  ttm_bo_tt_destroy+0x1c/0x50 [ttm]
[ +0.004464]  ttm_bo_cleanup_memtype_use+0x25/0x40 [ttm]
[ +0.005244]  ttm_bo_cleanup_refs+0x90/0x2c0 [ttm]
[ +0.004721]  ttm_bo_delayed_delete+0x235/0x250 [ttm]
[ +0.004981]  ttm_device_delayed_workqueue+0x13/0x40 [ttm]
[ +0.005422]  process_one_work+0x248/0x560
[ +0.004028]  worker_thread+0x4b/0x390
[ +0.003682]  ? process_one_work+0x560/0x560
[ +0.004199]  kthread+0xeb/0x120
[ +0.003163]  ? kthread_complete_and_exit+0x20/0x20
[ +0.004815]  ret_from_fork+0x1f/0x30

v2:
- Just use ttm_bo_wait() directly (Niranjana)
- Add testcase reference

Testcase: igt@gem_madvise@dontneed-evict-race Fixes: 213d50927763 ("drm/i915/ttm: Introduce a TTM i915 gem object backend") Reported-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Cc: <stable@vger.kernel.org> # v5.15+ Reviewed-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Acked-by: Nirmoy Das <Nirmoy.Das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221115104620.120432-1-matthew.auld@intel.com
2022-11-16  drm/i915: Remove unwanted ghost obj check  (Nirmoy Das)
vm_fault_ttm() should not expect ttm ghost obj so remove that check. Suggested-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221024144558.27747-1-nirmoy.das@intel.com
2022-11-16  drm/i915/selftests: add igt_vma_move_to_active_unlocked  (Andrzej Hajda)
All calls to i915_vma_move_to_active are surrounded by the vma lock, and/or there are multiple local helpers for it in particular tests. Let's replace these with a common helper. The patch should not introduce functional changes. Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221019215906.295296-3-andrzej.hajda@intel.com
2022-11-16  drm/i915: call i915_request_await_object from _i915_vma_move_to_active  (Andrzej Hajda)
Since almost all calls to i915_vma_move_to_active are prepended with i915_request_await_object, let's call the latter from _i915_vma_move_to_active by default and add a flag allowing callers to bypass it. Adjust all callers accordingly. The patch should not introduce functional changes. Signed-off-by: Andrzej Hajda <andrzej.hajda@intel.com> Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221019215906.295296-2-andrzej.hajda@intel.com
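A before/after sketch of the calling pattern being refactored, hedged: the identifiers come from the commit text and existing i915 helpers, but the flag handling in the real code is more involved.

    /* before: every caller paired the two calls by hand */
    err = i915_request_await_object(rq, vma->obj, flags & EXEC_OBJECT_WRITE);
    if (!err)
            err = i915_vma_move_to_active(vma, rq, flags);

    /* after: the await happens inside move_to_active by default,
     * unless the caller passes the new bypass flag */
    err = i915_vma_move_to_active(vma, rq, flags);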
2022-11-14  Merge drm/drm-next into drm-intel-next  (Rodrigo Vivi)
Catch up on 6.1-rc cycle in order to solve the intel_backlight conflict on linux-next. Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
2022-11-14  drm/i915/ttm: fix uaf with lmem_userfault_list handling  (Matthew Auld)
In the fault handler, make sure we check if the BO maps lmem after we schedule the migration, since the current resource might change from lmem to smem, if the pages are in the non-cpu visible portion of lmem. This then leads to adding the object to the lmem_userfault_list even though the current resource is no longer lmem. If we then destroy the object, the list might still contain a link to the now free object, since we only remove it if the object is still in lmem. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7469 Fixes: ad74457a6b5a ("drm/i915/dgfx: Release mmap on rpm suspend") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221107165414.56970-1-matthew.auld@intel.com (cherry picked from commit 625b74460ec0978979f883fbee117e1b97e6e35e) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2022-11-11  drm: Assert held reservation lock for dma-buf mmapping  (Dmitry Osipenko)
When userspace mmaps a dma-buf's fd, the dma-buf reservation lock must be held. Add locking sanity checks to the dma-buf mmapping callbacks of DRM drivers to ensure that the locking assumptions won't regress in the future. Suggested-by: Daniel Vetter <daniel@ffwll.ch> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Acked-by: Christian König <christian.koenig@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221110201349.351294-3-dmitry.osipenko@collabora.com
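The kind of check being added, sketched with the standard reservation-object helper; dma_resv_assert_held() and the dma_buf fields are real, while the callback below is a stand-in, since the exact call sites differ per driver:

    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>
    #include <linux/mm.h>

    static int example_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
    {
            /* Lockdep-only sanity check: the caller must hold the dma-buf's
             * reservation lock when mmapping, as described above. */
            dma_resv_assert_held(dma_buf->resv);

            /* ... driver-specific mmap setup would follow ... */
            return 0;
    }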
2022-11-11  drm/i915: stop including i915_irq.h from i915_trace.h  (Jani Nikula)
Turns out many of the files that need i915_reg.h get it implicitly via {display/intel_de.h, gt/intel_context.h} -> i915_trace.h -> i915_irq.h -> i915_reg.h. Since i915_trace.h doesn't actually need i915_irq.h, it makes sense to drop it, but that requires adding quite a few new includes all over the place. Prefer including i915_reg.h where needed instead of adding another implicit include, because eventually we'll want to split up i915_reg.h and only include the specific registers at each place. Also some places actually needed i915_irq.h too. Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Jani Nikula <jani.nikula@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/6e78a2e0ac1bffaf5af3b5ccc21dff05e6518cef.1668008071.git.jani.nikula@intel.com
2022-11-10  drm/i915: Partial abandonment of legacy DRM logging macros  (Tvrtko Ursulin)
Convert some usages of legacy DRM logging macros into versions which tell us on which device have the events occurred. v2: * Don't have struct drm_device as local. (Jani, Ville) v3: * Store gt, not i915, in workaround list. (John) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Acked-by: Jani Nikula <jani.nikula@intel.com> Cc: Jani Nikula <jani.nikula@intel.com> Cc: John Harrison <John.C.Harrison@Intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221109104633.2579245-1-tvrtko.ursulin@linux.intel.com
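An example of the conversion pattern (the log messages themselves are made up; DRM_ERROR()/drm_err() and drm_dbg() are the standard legacy and device-aware DRM logging macros):

    /* before: no indication of which GPU logged the message */
    DRM_ERROR("workaround verification failed\n");

    /* after: tied to the struct drm_device, so multi-GPU logs are unambiguous */
    drm_err(&i915->drm, "workaround verification failed\n");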
2022-11-09  drm/i915: use i915_sg_dma_sizes() for all backends  (Matthew Auld)
We rely on page_sizes.sg in setup_scratch_page() reporting the correct value if the underlying sgl is not contiguous, however in get_pages_internal() we are only looking at the layout of the created pages when calculating the sg_page_sizes, and not the final sgl, which could in theory be completely different. In such a situation we might incorrectly think we have a 64K scratch page, when it is actually only 4K or similar split over multiple non-contiguous entries, which could lead to broken behaviour when touching the scratch space within the padding of a 64K GTT page-table. For most of the other backends we already just call i915_sg_dma_sizes() on the final mapping, so rather just move that into __i915_gem_object_set_pages() to avoid such issues coming back to bite us later. v2: Update missing conversion in gvt Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Stuart Summers <stuart.summers@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221108103238.165447-1-matthew.auld@intel.com
2022-11-08  drm/i915/ttm: add some sanity checks for lmem_userfault_list  (Matthew Auld)
Rather than getting some hard-to-debug uaf, add some warns to hopefully catch issues with userfault_count being non-zero when destroying the object. Also warn if we somehow add an object to lmem_userfault_list that doesn't actually map lmem. References: https://gitlab.freedesktop.org/drm/intel/-/issues/7469 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221107165414.56970-2-matthew.auld@intel.com
2022-11-08  drm/i915/ttm: fix uaf with lmem_userfault_list handling  (Matthew Auld)
In the fault handler, make sure we check if the BO maps lmem after we schedule the migration, since the current resource might change from lmem to smem, if the pages are in the non-cpu visible portion of lmem. This then leads to adding the object to the lmem_userfault_list even though the current resource is no longer lmem. If we then destroy the object, the list might still contain a link to the now free object, since we only remove it if the object is still in lmem. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7469 Fixes: ad74457a6b5a ("drm/i915/dgfx: Release mmap on rpm suspend") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Anshuman Gupta <anshuman.gupta@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Andrzej Hajda <andrzej.hajda@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221107165414.56970-1-matthew.auld@intel.com
2022-11-07  drm/i915/userptr: restore probe_range behaviour  (Matthew Auld)
The conversion looks harmless, however the addr value is updated inside the loop with the previous vm_end, which then incorrectly leads to for_each_vma_range() iterating over stuff outside the range we care about. Fix this by storing the end value separately. Also fix the case where the range doesn't intersect with any vma, or where the vma itself doesn't extend the entire range, which must mean we have a hole at the end. Both should result in an error, as per the previous behaviour. v2: Fix the cases where the range is empty, or if there's a hole at the end of the range Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7247 Testcase: igt@gem_userptr_blits@probe Fixes: f683b9d61319 ("i915: use the VMA iterator") Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Liam R. Howlett <Liam.Howlett@Oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yu Zhao <yuzhao@google.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028130635.465839-1-matthew.auld@intel.com (cherry picked from commit 6f7de35b50860c345babf8ed0aa0d75f9315eee4) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
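A hedged sketch of the shape of the fix: remember the end of the probed range up front instead of recomputing it from addr (which the loop advances), and treat an uncovered tail or an empty range as an error. VMA_ITERATOR() and for_each_vma_range() are the real mm helpers; the function below is illustrative and the in-tree probe_range() differs in detail.

    #include <linux/mm.h>

    static int probe_range_sketch(struct mm_struct *mm, unsigned long addr,
                                  unsigned long len)
    {
            const unsigned long end = addr + len;   /* fixed endpoint for the walk */
            struct vm_area_struct *vma;
            VMA_ITERATOR(vmi, mm, addr);

            for_each_vma_range(vmi, vma, end) {
                    if (vma->vm_start > addr)
                            return -EFAULT;         /* hole before this vma */
                    /* ... per-vma validity checks ... */
                    addr = vma->vm_end;
            }

            return addr < end ? -EFAULT : 0;        /* empty range or hole at the end */
    }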
2022-11-07  drm/i915: Do not set cache_dirty for DGFX  (Niranjana Vishwanathapura)
Currently on DG1, which does not have LLC, we hit the below warning while rebinding a userptr invalidated object.

WARNING: CPU: 4 PID: 13008 at drivers/gpu/drm/i915/gem/i915_gem_pages.c:34 __i915_gem_object_set_pages+0x296/0x2d0 [i915]
...
RIP: 0010:__i915_gem_object_set_pages+0x296/0x2d0 [i915]
...
Call Trace:
 <TASK>
 i915_gem_userptr_get_pages+0x175/0x1a0 [i915]
 ____i915_gem_object_get_pages+0x32/0xb0 [i915]
 i915_gem_object_userptr_submit_init+0x286/0x470 [i915]
 eb_lookup_vmas+0x2ff/0xcf0 [i915]
 ? __intel_wakeref_get_first+0x55/0xb0 [i915]
 i915_gem_do_execbuffer+0x785/0x21d0 [i915]
 i915_gem_execbuffer2_ioctl+0xe7/0x3d0 [i915]

We shouldn't be setting obj->cache_dirty for DGFX; fix it. Fixes: d70af57944a1 ("drm/i915/shmem: ensure flush during swap-in on non-LLC") Suggested-by: Matthew Auld <matthew.auld@intel.com> Reported-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221102051416.27327-1-niranjana.vishwanathapura@intel.com (cherry picked from commit 0aeec60c76ca2631696b4228f3fc99fe3a80013d) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
2022-11-07  drm/i915/dmabuf: fix sg_table handling in map_dma_buf  (Matthew Auld)
We need to iterate over the original entries here for the sg_table, pulling out the struct page for each one, to be remapped. However currently this incorrectly iterates over the final dma mapped entries, which is likely just one gigantic sg entry if the iommu is enabled, leading to us only mapping the first struct page (and any physically contiguous pages following it), even if there is potentially lots more data to follow. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306 Fixes: 1286ff739773 ("i915: add dmabuf/prime buffer sharing support.") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Michael J. Ruhl <michael.j.ruhl@intel.com> Cc: <stable@vger.kernel.org> # v3.5+ Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028155029.494736-1-matthew.auld@intel.com (cherry picked from commit 28d52f99bbca7227008cf580c9194c9b3516968e) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
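A sketch of the corrected iteration: walk the exporter's CPU-side entries over orig_nents, not the post-DMA-mapping nents. The scatterlist helpers used here are standard; the wrapper function and its arguments are illustrative stand-ins rather than the actual i915 code.

    #include <linux/scatterlist.h>

    /* Copy page pointers from the source table's original (CPU) entries. */
    static void copy_sg_pages_sketch(struct sg_table *dst, struct sg_table *src)
    {
            struct scatterlist *s, *d = dst->sgl;
            unsigned int i;

            for_each_sg(src->sgl, s, src->orig_nents, i) {  /* not src->nents */
                    sg_set_page(d, sg_page(s), s->length, 0);
                    d = sg_next(d);
            }
    }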
2022-11-04  drm/i915/guc: Don't deadlock busyness stats vs reset  (John Harrison)
The engine busyness stats have a worker function to do things like 64-bit extend the 32-bit hardware counters. The GuC's reset-prepare function flushes out this worker function to ensure no corruption happens during the reset. Unfortunately, the worker function has an infinite wait for active resets to finish before doing its work. Thus a deadlock would occur if the worker function had actually started just as the reset starts. The function being used to lock the reset-in-progress mutex is called intel_gt_reset_trylock(). However, as noted, it does not follow standard 'trylock' conventions and exit if already locked. So rename the current _trylock function to intel_gt_reset_lock_interruptible(), which is the behaviour it actually provides. In addition, add a new implementation of _trylock and call that from the busyness stats worker instead. v2: Rename existing trylock to interruptible rather than trying to preserve the existing (confusing) naming scheme (review comments from Tvrtko). Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Reviewed-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221102192109.2492625-3-John.C.Harrison@Intel.com
2022-11-04  drm/i915/userptr: restore probe_range behaviour  (Matthew Auld)
The conversion looks harmless, however the addr value is updated inside the loop with the previous vm_end, which then incorrectly leads to for_each_vma_range() iterating over stuff outside the range we care about. Fix this by storing the end value separately. Also fix the case where the range doesn't intersect with any vma, or where the vma itself doesn't extend the entire range, which must mean we have a hole at the end. Both should result in an error, as per the previous behaviour. v2: Fix the cases where the range is empty, or if there's a hole at the end of the range Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7247 Testcase: igt@gem_userptr_blits@probe Fixes: f683b9d61319 ("i915: use the VMA iterator") Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Liam R. Howlett <Liam.Howlett@Oracle.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Yu Zhao <yuzhao@google.com> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com> Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028130635.465839-1-matthew.auld@intel.com
2022-11-04  Merge tag 'drm-intel-gt-next-2022-11-03' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  (Dave Airlie)
Driver Changes:
- Fix for #7306: [Arc A380] white flickering when using arc as a secondary gpu (Matt A)
- Add Wa_18017747507 for DG2 (Wayne)
- Avoid spurious WARN on DG1 due to incorrect cache_dirty flag (Niranjana, Matt A)
- Corrections to CS timestamp support for Gen5 and earlier (Ville)
- Fix a build error used with clang compiler on hwmon (GG)
- Improvements to LMEM handling with RPM (Anshuman, Matt A)
- Cleanups in dmabuf code (Mike)
- Selftest improvements (Matt A)

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/Y2N11wu175p6qeEN@jlahtine-mobl.ger.corp.intel.com
2022-11-04  Merge tag 'drm-misc-next-2022-11-03' of git://anongit.freedesktop.org/drm/drm-misc into drm-next  (Dave Airlie)
drm-misc-next for 6.2:

UAPI Changes:

Cross-subsystem Changes:
- dma-buf: locking improvements
- firmware: New API in the RaspberryPi firmware driver used by vc4

Core Changes:
- client: Null pointer dereference fix in drm_client_buffer_delete()
- mm/buddy: Add back random seed log
- ttm: Convert ttm_resource to use size_t for its size, fix for an undefined behaviour

Driver Changes:
- bridge:
  - adv7511: use dev_err_probe
  - it6505: Fix return value check of pm_runtime_get_sync
- panel:
  - sitronix: Fixes and clean-ups
- lcdif: Increase DMA burst size
- rockchip: runtime_pm improvements
- vc4: Fix for a regression preventing the use of 4k @ 60Hz, and further HDMI rate constraints check.
- vmwgfx: Cursor improvements

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Maxime Ripard <maxime@cerno.tech> Link: https://patchwork.freedesktop.org/patch/msgid/20221103083437.ksrh3hcdvxaof62l@houat
2022-11-02  drm/i915: Do not set cache_dirty for DGFX  (Niranjana Vishwanathapura)
Currently on DG1, which does not have LLC, we hit the below warning while rebinding a userptr invalidated object.

WARNING: CPU: 4 PID: 13008 at drivers/gpu/drm/i915/gem/i915_gem_pages.c:34 __i915_gem_object_set_pages+0x296/0x2d0 [i915]
...
RIP: 0010:__i915_gem_object_set_pages+0x296/0x2d0 [i915]
...
Call Trace:
 <TASK>
 i915_gem_userptr_get_pages+0x175/0x1a0 [i915]
 ____i915_gem_object_get_pages+0x32/0xb0 [i915]
 i915_gem_object_userptr_submit_init+0x286/0x470 [i915]
 eb_lookup_vmas+0x2ff/0xcf0 [i915]
 ? __intel_wakeref_get_first+0x55/0xb0 [i915]
 i915_gem_do_execbuffer+0x785/0x21d0 [i915]
 i915_gem_execbuffer2_ioctl+0xe7/0x3d0 [i915]

We shouldn't be setting obj->cache_dirty for DGFX; fix it. Fixes: d70af57944a1 ("drm/i915/shmem: ensure flush during swap-in on non-LLC") Suggested-by: Matthew Auld <matthew.auld@intel.com> Reported-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221102051416.27327-1-niranjana.vishwanathapura@intel.com
2022-11-01  Merge tag 'drm-intel-next-2022-10-28' of git://anongit.freedesktop.org/drm/drm-intel into drm-next  (Dave Airlie)
- Hotplug code clean-up and organization (Jani, Gustavo)
- More VBT specific code clean-up, doc, organization, and improvements (Ville)
- More MTL enabling work (Matt, RK, Anusha, Jose)
- FBC related clean-ups and improvements (Ville)
- Removing unused sw_fence_await_reservation (Niranjana)
- Big chunk of display house clean-up (Ville)
- Many Watermark fixes and clean-ups (Ville)
- Fix device info for devices without display (Jani)
- Fix TC port PLLs after readout (Ville)
- DPLL ID clean-ups (Ville)
- Prep work for finishing (de)gamma readout (Ville)
- PSR fixes and improvements (Jouni, Jose)
- Reject excessive dotclocks early (Ville)
- DRRS related improvements (Ville)
- Simplify uncore register updates (Andrzej)
- Fix simulated GPU reset wrt. encoder HW readout (Imre)
- Add an ADL-P workaround (Jose)
- Fix clear mask in GEN7_MISCCPCTL update (Andrzej)
- Temporarily disable runtime_pm for discrete (Anshuman)
- Improve fbdev debugs (Nirmoy)
- Fix DP FRL link training status (Ankit)
- Other small display fixes (Ankit, Suraj)
- Allow panel fixed modes to have differing sync polarities (Ville)
- Clean up crtc state flag checks (Ville)
- Fix race conditions during DKL PHY accesses (Imre)
- Prep-work for cdclock squash and crawl modes (Anusha)
- ELD precompute and readout (Ville)

Signed-off-by: Dave Airlie <airlied@redhat.com> From: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/Y1wd6ZJ8LdJpCfZL@intel.com
2022-10-31  drm/i915/dmabuf: Use scatterlist for_each_sg API  (Michael J. Ruhl)
Update open coded for loop to use the standard scatterlist for_each_sg API. Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028155029.494736-4-matthew.auld@intel.com
2022-10-31  drm/i915/dmabuf: dmabuf cleanup  (Michael J. Ruhl)
Some minor cleanup of some variables for consistency. Normalize struct sg_table to sgt. Normalize struct dma_buf_attachment to attach. Fix checkpatch issues (sizeof() and !NULL updates). Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028155029.494736-3-matthew.auld@intel.com
2022-10-31  drm/i915/selftests: exercise GPU access from the importer  (Matthew Auld)
Using PAGE_SIZE here potentially hides issues, so bump that to something larger. This should also make it possible for the iommu to coalesce entries for us. With that in place, verify we can write from the GPU using the importer's sg_table, followed by checking that our writes match when read from the CPU side. v2: Switch over to igt_gpu_fill_dw(), which looks to be more widely supported than the migrate stuff (at least OOTB). References: https://gitlab.freedesktop.org/drm/intel/-/issues/7306 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028155029.494736-2-matthew.auld@intel.com
2022-10-31  drm/i915/dmabuf: fix sg_table handling in map_dma_buf  (Matthew Auld)
We need to iterate over the original entries here for the sg_table, pulling out the struct page for each one, to be remapped. However currently this incorrectly iterates over the final dma mapped entries, which is likely just one gigantic sg entry if the iommu is enabled, leading to us only mapping the first struct page (and any physically contiguous pages following it), even if there is potentially lots more data to follow. Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/7306 Fixes: 1286ff739773 ("i915: add dmabuf/prime buffer sharing support.") Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Michael J. Ruhl <michael.j.ruhl@intel.com> Cc: <stable@vger.kernel.org> # v3.5+ Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221028155029.494736-1-matthew.auld@intel.com
2022-10-31  drm/i915/dgfx: Grab wakeref at i915_ttm_unmap_virtual  (Anshuman Gupta)
We had already grabbed the rpm wakeref in the obj destruction path, but it is also required to grab the wakeref when the object moves. When i915_gem_object_release_mmap_offset() gets called by i915_ttm_move_notify(), it will release the mmap offset without grabbing the wakeref. We want to avoid that; therefore, grab the wakeref at i915_ttm_unmap_virtual() accordingly. While doing that, also change the lmem_userfault_lock from a mutex to a spinlock, as spinlocks are widely used for lists. Also change if (obj->userfault_count) to GEM_BUG_ON(!obj->userfault_count). v2: - Removed lmem_userfault_{list,lock} from intel_gt. [Matt Auld] Fixes: ad74457a6b5a ("drm/i915/dgfx: Release mmap on rpm suspend") Suggested-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221027092242.1476080-3-anshuman.gupta@intel.com
2022-10-31  drm/i915: Encapsulate lmem rpm stuff in intel_runtime_pm  (Anshuman Gupta)
Runtime pm is not really per GT, therefore it makes sense to move lmem_userfault_list, lmem_userfault_lock and userfault_wakeref from intel_gt to the intel_runtime_pm structure, which is embedded in i915. No functional change. v2: - Fix the code comment nit. [Matt Auld] Signed-off-by: Anshuman Gupta <anshuman.gupta@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221027092242.1476080-2-anshuman.gupta@intel.com
2022-10-31  drm/i915: stop abusing swiotlb_max_segment  (Robert Beckett)
swiotlb_max_segment used to return either the maximum size that swiotlb could bounce, or for Xen PV PAGE_SIZE even if swiotlb could bounce buffer larger mappings. This made i915 work on Xen PV, as i915 bypasses the coherency aspect of the DMA API and can't cope with bounce buffering, and returning PAGE_SIZE avoided bounce buffering for the Xen/PV case. So instead of adding this hack back, check for Xen/PV directly in i915 for the Xen case and otherwise use the proper DMA API helper to query the maximum mapping size. Replace swiotlb_max_segment() calls with dma_max_mapping_size(). In i915_gem_object_get_pages_internal(), no longer consider max_segment only if CONFIG_SWIOTLB is enabled. There can be other (iommu related) causes of specific max segment sizes. Fixes: a2daa27c0c61 ("swiotlb: simplify swiotlb_max_segment") Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Robert Beckett <bob.beckett@collabora.com> Signed-off-by: Christoph Hellwig <hch@lst.de> [hch: added the Xen hack, rewrote the changelog] Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221020110308.1582518-1-hch@lst.de (cherry picked from commit 78a07fe777c42800bd1adaec12abe5dcee43919e) Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
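The replacement query, sketched with the standard helpers named above (dma_max_mapping_size() from the DMA API and xen_pv_domain() from the Xen headers); the actual i915 helper clamps and page-aligns the result further, so treat this as an assumption-laden outline:

    #include <linux/dma-mapping.h>
    #include <xen/xen.h>

    static size_t max_segment_sketch(struct device *dev)
    {
            if (xen_pv_domain())
                    return PAGE_SIZE;          /* keep Xen PV on page-sized segments */

            return dma_max_mapping_size(dev);  /* proper DMA API query */
    }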
2022-10-27  i915/i915_gem_context: Remove debug message in i915_gem_context_create_ioctl  (Karolina Drobnik)
We know that as long as the GEM context create ioctl succeeds, a context was created. There is no need to write about it, especially when such a message heavily pollutes dmesg and makes debugging actual errors harder. Since commit baa89ba3f1fe ("drm/i915/gem: initial conversion to new logging macros using coccinelle"), the logging for creating a new user context was moved under the driver debug output (for lack of a means for per-user logs, and a lack of a user-focused drm.debug parameter). This only reveals how obnoxious it is to have that spam be part of the driver debug logs, so remove it. [ from Chris Wilson ] Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Karolina Drobnik <karolina.drobnik@intel.com> Cc: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221025091903.986819-1-karolina.drobnik@intel.com
2022-10-27  drm/ttm: rework on ttm_resource to use size_t type  (Somalapuram Amaranath)
Change ttm_resource structure from num_pages to size_t size in bytes.
v1 -> v2: change PFN_UP(dst_mem->size) to ttm->num_pages
v1 -> v2: change bo->resource->size to bo->base.size at some places
v1 -> v2: remove the local variable
v1 -> v2: cleanup cmp_size_smaller_first()
v2 -> v3: adding missing PFN_UP in ttm_bo_vm_fault_reserved
Signed-off-by: Somalapuram Amaranath <Amaranath.Somalapuram@amd.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221027091237.983582-1-Amaranath.Somalapuram@amd.com Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Christian König <christian.koenig@amd.com>
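What the type change means for users of the field, in one line: bo->resource->size is now a byte count, so page counts are derived with the standard PFN_UP() macro (sketch only, outside any particular call site):

    /* was: bo->resource->num_pages */
    pgoff_t num_pages = PFN_UP(bo->resource->size);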
2022-10-27  drm/i915: stop abusing swiotlb_max_segment  (Robert Beckett)
swiotlb_max_segment used to return either the maximum size that swiotlb could bounce, or for Xen PV PAGE_SIZE even if swiotlb could bounce buffer larger mappings. This made i915 work on Xen PV, as i915 bypasses the coherency aspect of the DMA API and can't cope with bounce buffering, and returning PAGE_SIZE avoided bounce buffering for the Xen/PV case. So instead of adding this hack back, check for Xen/PV directly in i915 for the Xen case and otherwise use the proper DMA API helper to query the maximum mapping size. Replace swiotlb_max_segment() calls with dma_max_mapping_size(). In i915_gem_object_get_pages_internal(), no longer consider max_segment only if CONFIG_SWIOTLB is enabled. There can be other (iommu related) causes of specific max segment sizes. Fixes: a2daa27c0c61 ("swiotlb: simplify swiotlb_max_segment") Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com> Signed-off-by: Robert Beckett <bob.beckett@collabora.com> Signed-off-by: Christoph Hellwig <hch@lst.de> [hch: added the Xen hack, rewrote the changelog] Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221020110308.1582518-1-hch@lst.de
2022-10-26drm/i915/guc: Delay disabling guc_id scheduling for better hysteresisMatthew Brost
Add a delay, configurable via debugfs (default 34ms), to disable scheduling of a context after the pin count goes to zero. Disable scheduling is a costly operation as it requires synchronizing with the GuC. So the idea is that a delay allows the user to resubmit something before doing this operation. This delay is only done if the context isn't closed and less than a given threshold (default is 3/4) of the guc_ids are in use. Alan Previn: Matt Brost first introduced this patch back in Oct 2021. However no real world workload with measured performance impact was available to prove the intended results. Today, this series is being republished in response to a real world workload that benefited greatly from it along with measured performance improvement. Workload description: 36 containers were created on a DG2 device where each container was performing a combination of 720p 3d game rendering and 30fps video encoding. The workload density was configured in a way that guaranteed each container to ALWAYS be able to render and encode no less than 30fps with a predefined maximum render + encode latency time. That means the totality of all 36 containers and their workloads were not saturating the engines to their max (in order to maintain just enough headrooom to meet the min fps and max latencies of incoming container submissions). Problem statement: It was observed that the CPU core processing the i915 soft IRQ work was experiencing severe load. Using tracelogs and an instrumentation patch to count specific i915 IRQ events, it was confirmed that the majority of the CPU cycles were caused by the gen11_other_irq_handler() -> guc_irq_handler() code path. The vast majority of the cycles was determined to be processing a specific G2H IRQ: i.e. INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE. These IRQs are sent by GuC in response to i915 KMD sending H2G requests: INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_SET. Those H2G requests are sent whenever a context goes idle so that we can unpin the context from GuC. The high CPU utilization % symptom was limiting density scaling. Root Cause Analysis: Because the incoming execution buffers were spread across 36 different containers (each with multiple contexts) but the system in totality was NOT saturated to the max, it was assumed that each context was constantly idling between submissions. This was causing a thrashing of unpinning contexts from GuC at one moment, followed quickly by repinning them due to incoming workload the very next moment. These event-pairs were being triggered across multiple contexts per container, across all containers at the rate of > 30 times per sec per context. Metrics: When running this workload without this patch, we measured an average of ~69K INTEL_GUC_ACTION_SCHED_CONTEXT_MODE_DONE events every 10 seconds or ~10 million times over ~25+ mins. With this patch, the count reduced to ~480 every 10 seconds or about ~28K over ~10 mins. The improvement observed is ~99% for the average counts per 10 seconds. Design awareness: Selftest impact. As temporary WA disable this feature for the selftests. Selftests are very timing sensitive and any change in timing can cause failure. A follow up patch will fixup the selftests to understand this delay. Design awareness: Race between guc_request_alloc and guc_context_close. If a context close is issued while there is a request submission in flight and a delayed schedule disable is pending, guc_context_close and guc_request_alloc will race to cancel the delayed disable. 
To close the race, make sure that guc_request_alloc waits for guc_context_close to finish running before checking any state.

Design awareness: GT reset event. If a GT reset is triggered, add a preparation step to ensure that every context with a pending delayed-disable-schedule worker has it flushed. Move such contexts directly into the closed state after cancelling the worker. This is okay because the existing flow flushes all yet-to-arrive G2Hs, dropping them anyway.

Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Alan Previn <alan.previn.teres.alexis@intel.com> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Reviewed-by: John Harrison <John.C.Harrison@Intel.com> Signed-off-by: John Harrison <John.C.Harrison@Intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221006225121.826257-2-alan.previn.teres.alexis@intel.com
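For orientation, here is a minimal sketch of the delayed-disable idea described above. The context struct, GUC_ID_MAX, guc_ids_in_use() and the H2G stub are made-up stand-ins rather than the actual i915/GuC code; only the 34ms default and the 3/4 threshold come from the commit message.

  #include <linux/workqueue.h>
  #include <linux/jiffies.h>

  #define SCHED_DISABLE_DELAY_MS  34     /* debugfs-tunable default */
  #define GUC_ID_MAX              65535  /* stand-in for the guc_id pool size */

  struct sched_ctx {
      struct delayed_work sched_disable_work;
      bool closed;
  };

  /* Stand-ins for the real GuC plumbing (H2G SCHED_CONTEXT_MODE_SET, etc). */
  static unsigned int guc_ids_in_use(void) { return 0; }
  static void send_sched_disable_h2g(struct sched_ctx *ce) { }

  static void sched_disable_worker_fn(struct work_struct *wrk)
  {
      struct sched_ctx *ce = container_of(to_delayed_work(wrk),
                                          struct sched_ctx,
                                          sched_disable_work);

      /* The costly part: synchronize with the GuC to disable scheduling. */
      send_sched_disable_h2g(ce);
  }

  static void ctx_init(struct sched_ctx *ce)
  {
      INIT_DELAYED_WORK(&ce->sched_disable_work, sched_disable_worker_fn);
      ce->closed = false;
  }

  static void ctx_last_unpin(struct sched_ctx *ce)
  {
      /*
       * Defer the disable only while the context is still open and guc_id
       * pressure is low (fewer than 3/4 of the guc_ids in use); otherwise
       * disable immediately, as before.
       */
      if (!ce->closed && guc_ids_in_use() < 3 * GUC_ID_MAX / 4)
          mod_delayed_work(system_unbound_wq, &ce->sched_disable_work,
                           msecs_to_jiffies(SCHED_DISABLE_DELAY_MS));
      else
          send_sched_disable_h2g(ce);
  }

  static void ctx_repin(struct sched_ctx *ce)
  {
      /* A resubmission arrived within the delay: skip the disable entirely. */
      cancel_delayed_work(&ce->sched_disable_work);
  }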
2022-10-20drm/i915/selftests: Stop using kthread_stop()Tvrtko Ursulin
Since a7c01fa93aeb ("signal: break out of wait loops on kthread_stop()") kthread_stop() started asserting a pending signal, which wreaks havoc with a few of our selftests. Mainly because they are not fully expecting to handle signals, but also because it cuts the intended test runtimes short due to signal_pending() now returning true (via __igt_timeout), which therefore breaks both of these patterns:

  kthread_run()
  ..sleep for igt_timeout_ms to allow test to exercise stuff..
  kthread_stop()
  And check for errors recorded in the thread.

And also:

  Main thread                    | Test thread
  -------------------------------+--------------------------------
  kthread_run()                  |
  kthread_stop()                 | do stuff until __igt_timeout
                                 | -- exits early due to signal --

Here kthread_stop() was assumed to have "join" semantics, which it would have had if not for the new signal assertion issue.

To recap, threads are now likely to catch a previously impossible ERESTARTSYS or EINTR, marking the test as failed, or to have a pointlessly short run time.

To work around this, start using the kthread_work(er) API, which provides an explicit way of waiting for threads to exit. And for cases where the parent controls the test duration, we add explicit signaling which threads will now use instead of relying on kthread_should_stop().

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221020130841.3845791-1-tvrtko.ursulin@linux.intel.com
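A rough sketch of the kthread_work(er) pattern described above; the struct, test body, thread name and timings are illustrative stand-ins, not the actual selftest code:

  #include <linux/kthread.h>
  #include <linux/delay.h>
  #include <linux/atomic.h>
  #include <linux/sched.h>
  #include <linux/err.h>

  struct test_thread {
      struct kthread_worker *worker;
      struct kthread_work work;
      atomic_t stop;   /* explicit signal, replaces kthread_should_stop() */
      int result;
  };

  static void test_body(struct kthread_work *work)
  {
      struct test_thread *t = container_of(work, struct test_thread, work);

      while (!atomic_read(&t->stop)) {
          /* ... exercise stuff ... */
          cond_resched();
      }
      t->result = 0;
  }

  static int run_test(void)
  {
      struct test_thread t = { .stop = ATOMIC_INIT(0) };

      t.worker = kthread_create_worker(0, "igt/test");
      if (IS_ERR(t.worker))
          return PTR_ERR(t.worker);

      kthread_init_work(&t.work, test_body);
      kthread_queue_work(t.worker, &t.work);

      msleep(1000);                 /* let the test run for the intended time */
      atomic_set(&t.stop, 1);       /* ask the thread to stop... */
      kthread_flush_work(&t.work);  /* ...and wait for it ("join" semantics) */

      kthread_destroy_worker(t.worker);
      return t.result;
  }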
2022-10-20drm/i915: s/HAS_BAR2_SMEM_STOLEN/HAS_LMEMBAR_SMEM_STOLEN/Ville Syrjälä
The fact that LMEMBAR is BAR2 should be of no real interest to anyone. So use the name of the BAR rather than its index. Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221005154159.18750-3-ville.syrjala@linux.intel.com Acked-by: Matthew Auld <matthew.auld@intel.com>
2022-10-19drm/i915: Refactor ttm ghost obj detectionNirmoy Das
Currently i915_ttm_to_gem() returns NULL for a TTM ghost object, which makes it unclear when a caller of i915_ttm_to_gem() should add a NULL check, since TTM ghost objects are expected in certain cases. Create a separate function to detect a TTM ghost object and use that in the places where we expect a ghost object from TTM. Signed-off-by: Nirmoy Das <nirmoy.das@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221014131427.21102-1-nirmoy.das@intel.com
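One way such a ghost-object check can be expressed is sketched below. The names are hypothetical, and the heuristic of testing the destroy callback is an assumption for illustration, not necessarily the exact i915 implementation; the point is that the caller asks "is this a ghost object?" explicitly instead of interpreting a NULL return.

  #include <drm/ttm/ttm_bo_api.h>
  #include <linux/bug.h>

  /* Illustrative driver object wrapping a TTM buffer object. */
  struct my_gem_object {
      struct ttm_buffer_object ttm_bo;
  };

  static void my_ttm_bo_destroy(struct ttm_buffer_object *bo)
  {
      /* driver-specific teardown for objects the driver created itself */
  }

  /*
   * TTM "ghost" objects (e.g. created internally while pipelining a move)
   * never come from the driver's own allocation path, so they do not carry
   * the driver's destroy callback.
   */
  static bool my_ttm_is_ghost_object(struct ttm_buffer_object *bo)
  {
      return bo->destroy != my_ttm_bo_destroy;
  }

  static struct my_gem_object *my_ttm_to_gem(struct ttm_buffer_object *bo)
  {
      /* Callers are now expected to filter out ghost objects first. */
      if (WARN_ON(my_ttm_is_ghost_object(bo)))
          return NULL;

      return container_of(bo, struct my_gem_object, ttm_bo);
  }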
2022-10-18Merge drm/drm-next into drm-misc-nextMaxime Ripard
Let's kick off this release cycle. Signed-off-by: Maxime Ripard <maxime@cerno.tech>
2022-10-18drm/i915: Prepare to dynamic dma-buf locking specificationDmitry Osipenko
Prepare the i915 driver for the common dynamic dma-buf locking convention by starting to use the unlocked versions of the dma-buf API functions and by handling the cases where the importer now holds the reservation lock. Acked-by: Christian König <christian.koenig@amd.com> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221017172229.42269-7-dmitry.osipenko@collabora.com
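To illustrate the convention (a hedged sketch with made-up function names, not the i915 diff itself): importers move to the *_unlocked dma-buf wrappers, while exporter callbacks must assume the reservation lock is already held when they are invoked.

  #include <linux/dma-buf.h>
  #include <linux/dma-direction.h>
  #include <linux/dma-resv.h>
  #include <linux/err.h>

  /*
   * Importer side: outside of any dma_resv lock, use the _unlocked wrappers,
   * which take and drop the reservation lock internally.
   */
  static struct sg_table *importer_map(struct dma_buf_attachment *attach)
  {
      return dma_buf_map_attachment_unlocked(attach, DMA_BIDIRECTIONAL);
  }

  static void importer_unmap(struct dma_buf_attachment *attach,
                             struct sg_table *sgt)
  {
      dma_buf_unmap_attachment_unlocked(attach, sgt, DMA_BIDIRECTIONAL);
  }

  /*
   * Exporter callback: under the dynamic locking convention the reservation
   * lock is already held by the time ->map_dma_buf() is called, so the
   * exporter must not take it again; it can only assert it.
   */
  static struct sg_table *exporter_map(struct dma_buf_attachment *attach,
                                       enum dma_data_direction dir)
  {
      dma_resv_assert_held(attach->dmabuf->resv);

      /* ... build and dma-map the sg_table for the backing storage ... */
      return ERR_PTR(-ENODEV);  /* placeholder body for the sketch */
  }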
2022-10-16Merge tag 'random-6.1-rc1-for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/crng/random

Pull more random number generator updates from Jason Donenfeld:
 "This time with some large scale treewide cleanups.

  The intent of this pull is to clean up the way callers fetch random
  integers. The current rules for doing this right are:

  - If you want a secure or an insecure random u64, use get_random_u64()

  - If you want a secure or an insecure random u32, use get_random_u32()

    The old function prandom_u32() has been deprecated for a while now
    and is just a wrapper around get_random_u32(). Same for
    get_random_int().

  - If you want a secure or an insecure random u16, use get_random_u16()

  - If you want a secure or an insecure random u8, use get_random_u8()

  - If you want secure or insecure random bytes, use get_random_bytes().

    The old function prandom_bytes() has been deprecated for a while
    now and has long been a wrapper around get_random_bytes()

  - If you want a non-uniform random u32, u16, or u8 bounded by a
    certain open interval maximum, use prandom_u32_max()

    I say "non-uniform", because it doesn't do any rejection sampling
    or divisions. Hence, it stays within the prandom_*() namespace, not
    the get_random_*() namespace.

    I'm currently investigating a "uniform" function for 6.2. We'll see
    what comes of that.

  By applying these rules uniformly, we get several benefits:

  - By using prandom_u32_max() with an upper-bound that the compiler
    can prove at compile-time is ≤65536 or ≤256, internally
    get_random_u16() or get_random_u8() is used, which wastes fewer
    batched random bytes, and hence has higher throughput.

  - By using prandom_u32_max() instead of %, when the upper-bound is
    not a constant, division is still avoided, because
    prandom_u32_max() uses a faster multiplication-based trick instead.

  - By using get_random_u16() or get_random_u8() in cases where the
    return value is intended to indeed be a u16 or a u8, we waste fewer
    batched random bytes, and hence have higher throughput.

  This series was originally done by hand while I was on an airplane
  without Internet. Later, Kees and I worked on retroactively figuring
  out what could be done with Coccinelle and what had to be done
  manually, and then we split things up based on that. So while this
  touches a lot of files, the actual amount of code that's hand fiddled
  is comfortably small"

* tag 'random-6.1-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  prandom: remove unused functions
  treewide: use get_random_bytes() when possible
  treewide: use get_random_u32() when possible
  treewide: use get_random_{u8,u16}() when possible, part 2
  treewide: use get_random_{u8,u16}() when possible, part 1
  treewide: use prandom_u32_max() when possible, part 2
  treewide: use prandom_u32_max() when possible, part 1
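A few illustrative conversions in the spirit of the rules above; the variables and the function wrapping them are made up, only the helper names come from the pull request:

  #include <linux/random.h>
  #include <linux/prandom.h>
  #include <linux/types.h>

  static void rng_conversion_examples(void)
  {
      u32 v32 = get_random_u32();  /* was: prandom_u32() or get_random_int() */
      u16 v16 = get_random_u16();  /* sized helper, wastes fewer batched bytes */
      u8  v8  = get_random_u8();   /* likewise for a single random byte */

      /* Non-uniform value in [0, 10): preferred over "get_random_u32() % 10". */
      u32 idx = prandom_u32_max(10);

      (void)v32; (void)v16; (void)v8; (void)idx;
  }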
2022-10-14drm/i915: enable PS64 support for DG2Matthew Auld
It turns out that on production DG2/ATS HW we should have support for PS64. This feature allows providing a 64K TLB hint at the PTE level, which is a lot more flexible than the current method of enabling 64K GTT pages for the entire page-table, since that leads to all kinds of annoying restrictions, as documented in:

  commit caa574ffc4aaf4f29b890223878c63e2e7772f62
  Author: Matthew Auld <matthew.auld@intel.com>
  Date:   Sat Feb 19 00:17:49 2022 +0530

    drm/i915/uapi: document behaviour for DG2 64K support

    On discrete platforms like DG2, we need to support a minimum page size
    of 64K when dealing with device local-memory. This is quite tricky for
    various reasons, so try to document the new implicit uapi for this.

With PS64, we can now drop the 2M GTT alignment restriction, and instead only require 64K or larger when dealing with lmem. We still use the compact-pt layout when possible, but only when we are certain that this doesn't interfere with userspace.

Note that this is a change in uAPI behaviour, but hopefully shouldn't be a concern (IGT is at least able to autodetect the alignment), since we are only making the GTT alignment constraint less restrictive.

Based on a patch from CQ Tang.

v2: update the comment wrt scratch page
v3: (Nirmoy) - Fix the selftest to actually use the random size, plus some comment improvements, also drop the rem stuff.

Reported-by: Michal Mrozek <michal.mrozek@intel.com> Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Stuart Summers <stuart.summers@intel.com> Cc: Jordan Justen <jordan.l.justen@intel.com> Cc: Yang A Shi <yang.a.shi@intel.com> Cc: Nirmoy Das <nirmoy.das@intel.com> Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Acked-by: Michal Mrozek <michal.mrozek@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20221004114915.221708-1-matthew.auld@intel.com
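As a rough sketch of the uAPI-visible effect described above (the function and its decision logic are illustrative, based only on the commit description, not the actual i915 code paths or defines):

  #include <linux/sizes.h>
  #include <linux/types.h>

  /*
   * Illustrative only: how the minimum GTT alignment requirement relaxes
   * once a per-PTE 64K hint (PS64) is available for local memory.
   */
  static u64 min_gtt_alignment(bool is_lmem, bool has_ps64)
  {
      if (!is_lmem)
          return SZ_4K;  /* system memory keeps regular 4K pages */

      /*
       * Without PS64 the 64K hint applies to a whole page table, which in
       * practice forces 2M-aligned (and padded) VMAs for lmem. With PS64
       * the hint lives in the PTE itself, so 64K alignment is enough.
       */
      return has_ps64 ? SZ_64K : SZ_2M;
  }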