|
Fix all typos in files under drm/i915/gem reported by the codespell tool.
v2: Codespell won't catch it, but it should be
"user defined" and not "use defined". <Krzysztof Niemiec>
Signed-off-by: Nitin Gote <nitin.r.gote@intel.com>
Reviewed-by: Krzysztof Niemiec <krzysztof.niemiec@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20250120081517.3237326-4-nitin.r.gote@intel.com
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
mem->region is a struct resource, but mem->io_start and
mem->io_size are not for whatever reason. Let's unify this
and convert the io stuff into a struct resource as well.
Should make life a little less annoying when you don't have to
juggle between two different approaches all the time.
Mostly done using cocci (with manual tweaks at all the
places where we mutate io_size by hand):
@@
struct intel_memory_region *M;
expression START, SIZE;
@@
- M->io_start = START;
- M->io_size = SIZE;
+ M->io = DEFINE_RES_MEM(START, SIZE);
@@
struct intel_memory_region *M;
@@
- M->io_start
+ M->io.start
@@
struct intel_memory_region M;
@@
- M.io_start
+ M.io.start
@@
expression M;
@@
- M->io_size
+ resource_size(&M->io)
@@
expression M;
@@
- M.io_size
+ resource_size(&M.io)
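For illustration only, a minimal user-space model of the resulting idiom;
struct resource, DEFINE_RES_MEM() and resource_size() below are simplified
stand-ins for the <linux/ioport.h> definitions, and intel_memory_region is
reduced to the one field this patch touches:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's resource helpers. */
struct resource {
	uint64_t start;
	uint64_t end;	/* inclusive, as in <linux/ioport.h> */
};

#define DEFINE_RES_MEM(_start, _size) \
	(struct resource){ .start = (_start), .end = (_start) + (_size) - 1 }

static inline uint64_t resource_size(const struct resource *res)
{
	return res->end - res->start + 1;
}

/* Reduced intel_memory_region: only the field touched here. */
struct intel_memory_region {
	struct resource io;
};

int main(void)
{
	struct intel_memory_region mem;

	/* was: mem.io_start = start; mem.io_size = size; */
	mem.io = DEFINE_RES_MEM(0x100000000ull, 0x10000000ull);

	/* was: mem.io_start and mem.io_size */
	printf("io_start=%#llx io_size=%#llx\n",
	       (unsigned long long)mem.io.start,
	       (unsigned long long)resource_size(&mem.io));
	return 0;
}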
Reviewed-by: Andrzej Hajda <andrzej.hajda@intel.com>
Acked-by: Nirmoy Das <nirmoy.das@intel.com>
Tested-by: Paz Zcharya <pazz@chromium.org>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240202224340.30647-2-ville.syrjala@linux.intel.com
|
|
Since segment_pages is no longer a compile time constant, it looks like
the DIV_ROUND_UP(node->size, segment_pages) breaks the 32b build.
Simplest is just to use the ULL variant, but really we should not need
more than u32 for the page alignment (also we are limited by that due
to the sg->length type), so also make it all u32.
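A rough sketch of the problem with simplified stand-ins for the kernel
types and macros (not the driver code): a u64 divided by a non-constant
divisor on a 32-bit build becomes a 64-by-64 division, which gcc lowers
to a libgcc helper (__udivdi3) that the kernel does not provide; keeping
the page-count arithmetic in u32 sidesteps that entirely:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;
typedef uint64_t u64;

/* Stand-in for the kernel macro. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	u64 node_size = 1ull << 33;	/* example size in bytes */
	u32 segment_pages = 512;	/* runtime value, not a constant */

	/*
	 * Problematic on 32-bit: u64 / u32 with a non-constant divisor is
	 * promoted to a full 64-by-64 division (__udivdi3).
	 *
	 * u64 nents = DIV_ROUND_UP(node_size, segment_pages);
	 */

	/*
	 * Keeping everything in u32 (the sg->length type caps us there
	 * anyway) avoids the 64-bit division; 4K pages assumed here.
	 */
	u32 pages = (u32)(node_size >> 12);
	u32 nents = DIV_ROUND_UP(pages, segment_pages);

	printf("nents=%u\n", nents);
	return 0;
}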
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Fixes: bc99f1209f19 ("drm/i915/ttm: fix sg_table construction")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Nirmoy Das <nirmoy.das@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220712174050.592550-1-matthew.auld@intel.com
|
|
In order to get the GSC Support merged on drm-intel-gt-next
in a clean fashion we needed this ATS-M patch to avoid
conflict in i915_pci.c:
commit 412c942bdfae ("drm/i915/ats-m: add ATS-M platform info")
--
Fixing a silent conflict on drivers/gpu/drm/i915/gt/intel_gt_gmch.c:
- if (!intel_vtd_active(i915))
+ if (!i915_vtd_active(i915))
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
|
Sync up with v5.18-rc1, in particular to get 5e3094cfd9fb
("drm/i915/xehpsdv: Add has_flat_ccs to device info").
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Remove the uapi/drm/i915_drm.h include from drm/i915_drm.h, and stop
being a proxy for uapi/drm/i915_drm.h. Include uapi/drm/i915_drm.h and
drm/i915_drm.h only where needed.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220311100639.114685-2-jani.nikula@intel.com
|
|
Add a generic interface for allocating an object at some specific
offset, and convert stolen over. Later we will want to hook this up to
different backends.
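As a purely hypothetical sketch of the shape of such an interface (the
helper name, the bounds check and the reservation step are illustrative,
not the actual code added here):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t start;
	uint64_t size;
};

/* Hypothetical "create at offset": the caller pins down
 * [offset, offset + size) inside the region instead of letting the
 * allocator pick the placement. */
static bool create_object_at(const struct region *r,
			     uint64_t offset, uint64_t size)
{
	/* The requested range must lie entirely inside the region. */
	if (offset < r->start || offset + size > r->start + r->size)
		return false;

	/* A real backend would also reserve the range in its allocator
	 * before initialising the object. */
	return true;
}

int main(void)
{
	struct region stolen = { .start = 0, .size = 8 << 20 };

	printf("at 1M:    %s\n",
	       create_object_at(&stolen, 1 << 20, 4096) ? "ok" : "rejected");
	printf("past end: %s\n",
	       create_object_at(&stolen, 9 << 20, 4096) ? "ok" : "rejected");
	return 0;
}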
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Nirmoy Das <nirmoy.das@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220315181425.576828-4-matthew.auld@intel.com
|
|
It looks like this code was accidentally dropped at some point (in a
slightly different form), so add it back. The gist is that if we know
the allocation will be one single chunk, then we can just annotate the
BO with I915_BO_ALLOC_CONTIGUOUS, even if the user doesn't bother. In
the future this should allow us to avoid using vmap for such objects,
in some upcoming patches.
v2(Thomas):
- Tweak the commit message to mention the future motivation
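A hedged sketch of the idea, assuming the backing allocator never splits
an allocation at or below its minimum chunk size (flag bit and threshold
are illustrative):

#include <stdint.h>
#include <stdio.h>

#define ALLOC_CONTIGUOUS	(1u << 0)	/* illustrative flag bit */

/* If this size can only ever be satisfied by a single block, the result
 * is contiguous by construction, so imply the flag even when the caller
 * did not ask for it. */
static unsigned int implied_flags(uint64_t size, uint64_t min_chunk,
				  unsigned int flags)
{
	if (size <= min_chunk)
		flags |= ALLOC_CONTIGUOUS;
	return flags;
}

int main(void)
{
	printf("%#x\n", implied_flags(4096, 65536, 0));	/* contiguous implied */
	printf("%#x\n", implied_flags(1 << 20, 65536, 0));	/* left as requested */
	return 0;
}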
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220202173154.3758970-1-matthew.auld@intel.com
|
|
If the user doesn't require CPU access for the buffer, then
ALLOC_GPU_ONLY should be used, in order to prioritise allocating in the
non-mappable portion of LMEM, on devices with small BAR.
v2(Thomas):
- The BO_ALLOC_TOPDOWN naming here is poor, since this is pure lies on
systems that don't even have small BAR. A better name is GPU_ONLY,
which is accurate regardless of the configuration.
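Illustrative only (the flag bit and helper below are made up for the
example), the caller-side decision boils down to:

#include <stdbool.h>
#include <stdio.h>

#define ALLOC_GPU_ONLY	(1u << 0)	/* illustrative flag bit */

/* Buffers the CPU never touches ask for GPU_ONLY, letting the allocator
 * prefer the non-CPU-mappable part of LMEM on small-BAR parts while
 * costing nothing on full-BAR systems. */
static unsigned int pick_flags(bool needs_cpu_access)
{
	return needs_cpu_access ? 0u : ALLOC_GPU_ONLY;
}

int main(void)
{
	printf("CPU-written buffer:  %#x\n", pick_flags(true));
	printf("GPU-internal buffer: %#x\n", pick_flags(false));
	return 0;
}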
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Nirmoy Das <nirmoy.das@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220225145502.331818-3-matthew.auld@intel.com
|
|
On DG2 we allow objects that are smaller than the min_page_size, under
the premise that these are never mapped by the GTT, like with the paging
structures. Currently the suspend-resume path will try to map such
objects through the migration vm, which hits:
[ 560.529217] kernel BUG at drivers/gpu/drm/i915/gt/intel_migrate.c:431!
[ 560.536081] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
[ 560.541629] CPU: 4 PID: 2062 Comm: rtcwake Tainted: G W 5.17.0-rc5-demarchi+ #175
[ 560.550716] Hardware name: Intel Corporation CoffeeLake Client Platform/CoffeeLake S UDIMM RVP, BIOS CNLSFWR1.R00.X220.B00.2103302221 03/30/2021
[ 560.563627] RIP: 0010:emit_pte+0x2e7/0x380 [i915]
[ 560.568665] Code: ee 02 48 89 69 04 83 c6 05 83 c0 05 39 f0 0f 4f c6 48 8b 73 08 39 d0 0f 4f c2 44 89 f2 4c 8d 4a ff 49 85 f1 0f 84 62 fe ff ff <0f> 0b 48 c7 03 00 00 00 00 4d 89 c6 8b 01 48 29 ce 48 8d 57 0c 48
[ 560.587691] RSP: 0018:ffffc9000104f8a0 EFLAGS: 00010206
[ 560.592906] RAX: 0000000000000040 RBX: ffffc9000104f908 RCX: ffffc900025114d0
[ 560.600024] RDX: 0000000000010000 RSI: 00000003f9fe2000 RDI: ffffc900025114dc
[ 560.607458] RBP: 0000000001840000 R08: ffff88810f335540 R09: 000000000000ffff
[ 560.614865] R10: 000000000000081b R11: 0000000000000001 R12: 000000000000081b
[ 560.622300] R13: 0000000000000000 R14: 0000000000010000 R15: ffff888107c3e240
[ 560.629716] FS: 00007f5b7c086580(0000) GS:ffff88846dc00000(0000) knlGS:0000000000000000
[ 560.638090] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 560.644132] CR2: 00007f3ab0a133a8 CR3: 000000010a43e003 CR4: 00000000003706e0
[ 560.651590] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 560.659002] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 560.666438] Call Trace:
[ 560.668885] <TASK>
[ 560.670983] intel_context_migrate_copy+0x1b1/0x4c0 [i915]
[ 560.676794] __i915_ttm_move+0x628/0x790 [i915]
[ 560.681704] ? dma_resv_iter_next+0x8f/0xb0
[ 560.686223] ? dma_resv_iter_first+0xe5/0x140
[ 560.690894] ? i915_deps_add_resv+0x4b/0x110 [i915]
[ 560.696147] ? dma_resv_reserve_shared+0x161/0x310
[ 560.701228] i915_gem_obj_copy_ttm+0x10f/0x220 [i915]
[ 560.706650] i915_ttm_backup+0x191/0x2f0 [i915]
[ 560.711558] i915_gem_process_region+0x266/0x3b0 [i915]
[ 560.717153] ? verify_cpu+0xf0/0x100
[ 560.721040] ? pci_pm_resume_early+0x20/0x20
[ 560.725603] i915_ttm_backup_region+0x47/0x70 [i915]
[ 560.730927] i915_gem_backup_suspend+0x141/0x170 [i91
For now let's just force the memcpy path for such objects during
suspend-resume.
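The shape of the workaround, as a hedged sketch rather than the actual
driver code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Objects smaller than the region's min_page_size are never GTT-mapped,
 * so the blitter/migration path cannot emit PTEs for them; back them up
 * with memcpy instead when suspending. */
static bool use_memcpy_for_backup(uint64_t obj_size, uint64_t min_page_size)
{
	return obj_size < min_page_size;
}

int main(void)
{
	uint64_t min_page_size = 64 * 1024;	/* example LMEM minimum */

	printf("4K paging structure -> %s\n",
	       use_memcpy_for_backup(4096, min_page_size) ? "memcpy" : "blit");
	printf("1M buffer           -> %s\n",
	       use_memcpy_for_backup(1 << 20, min_page_size) ? "memcpy" : "blit");
	return 0;
}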
Fixes: 00e27ad85bc9 ("drm/i915/migrate: add acceleration support for DG2")
Reported-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20220225103443.225228-1-matthew.auld@intel.com
|
|
There is an interesting refcounting loop:
struct intel_memory_region has a struct ttm_resource_manager,
ttm_resource_manager->move may hold a reference to i915_request,
i915_request may hold a reference to intel_context,
intel_context may hold a reference to drm_i915_gem_object,
drm_i915_gem_object may hold a reference to intel_memory_region.
Break this loop by dropping region reference counting.
In addition, have regions with a manager moving fence make sure
that all region objects are released before freeing the region.
v6:
- Fix a code comment.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20211122214554.371864-4-thomas.hellstrom@linux.intel.com
|
|
An upcoming common pattern is to traverse the region object list and
perform certain actions on all objects in a region. It's a little tricky
to get the list locking right, in particular since a gem object may
change region unless it's pinned or the object lock is held.
Define a function that does this for us and that takes an argument that
defines the action to be performed on each object.
v3:
- Improve structure documentation a bit (Matthew Auld)
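A heavily simplified, single-threaded model of the pattern (the real
version must take the region's object-list lock, grab a reference before
dropping it, and lock each object before applying the action; names are
illustrative):

#include <stdio.h>

struct obj {
	struct obj *next;
	int id;
};

struct region {
	struct obj *objects;	/* list head, lock-protected in reality */
};

typedef int (*region_apply_t)(struct obj *obj, void *data);

/* Walk every object in the region and apply the caller's action. */
static int region_process(struct region *r, region_apply_t fn, void *data)
{
	for (struct obj *o = r->objects; o; o = o->next) {
		int ret = fn(o, data);	/* e.g. backup, restore, shrink */

		if (ret)
			return ret;
	}
	return 0;
}

static int print_obj(struct obj *o, void *data)
{
	(void)data;
	printf("processing object %d\n", o->id);
	return 0;
}

int main(void)
{
	struct obj b = { NULL, 2 }, a = { &b, 1 };
	struct region r = { &a };

	return region_process(&r, print_obj, NULL);
}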
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210922062527.865433-3-thomas.hellstrom@linux.intel.com
|
|
For some specialised objects we might need something larger than the
region's min_page_size due to some hw restriction, and slightly more
hairy is needing something smaller with the guarantee that such objects
will never be inserted into any GTT, which is the case for the paging
structures.
This also fixes how we setup the BO page_alignment, if we later migrate
the object somewhere else. For example if the placements are {SMEM,
LMEM}, then we might get this wrong. Pushing the min_page_size behaviour
into the manager should fix this.
v2(Thomas): push the default page size behaviour into buddy_man, and let
the user override it with the page-alignment, which looks cleaner
v3: rebase on ttm sys changes
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210625103824.558481-1-matthew.auld@intel.com
|
|
For discrete, use TTM for both cached and WC system memory. That means
we currently rely on the TTM memory accounting / shrinker. For cached
system memory we should consider remaining shmem-backed, which can be
implemented from our ttm_tt_populate callback. We can then also reuse our
own very elaborate shrinker for that memory.
If an object is evicted to a gem allowable region, we will now consider
the object migrated, and we flip the gem region and move the object to a
different region list. Since we are now changing gem regions, we can no
longer rely on the CONTIGUOUS flag being set based on the region
min page size, so remove that flag update. If we want to reintroduce it,
we need to put it in the mutable flags.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210624084240.270219-4-thomas.hellstrom@linux.intel.com
|
|
The most logical place to introduce TTM buffer objects is as an i915
gem object backend. We need to add some ops to account for added
functionality like delayed delete and LRU list manipulation.
Initially we support only LMEM and SYSTEM memory, but SYSTEM
(which in this case means evicted LMEM objects) is not
visible to i915 GEM yet. The plan is to move the i915 gem system region
over to the TTM system memory type in upcoming patches.
We set up GPU bindings directly both from LMEM and from the system region,
as there is no need to use the legacy TTM_TT memory type. We reserve
that for future porting of GGTT bindings to TTM.
Remove the old lmem backend.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-2-thomas.hellstrom@linux.intel.com
|
|
Temporarily remove the buddy allocator and related selftests
and hook up the TTM range manager for i915 regions.
Also modify the mock region selftests somewhat to account for a
fragmenting manager.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210602083818.241793-2-thomas.hellstrom@linux.intel.com
|
|
For some internal device local-memory objects it would be useful to have
an option to CPU clear the pages upon gathering the backing store. Note
that this might be before the blitter is usable, which is the case for
some internal GuC objects.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Lionel Landwerlin <lionel.g.landwerlin@linux.intel.com>
Cc: Jon Bloomfield <jon.bloomfield@intel.com>
Cc: Jordan Justen <jordan.l.justen@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Kenneth Graunke <kenneth@whitecape.org>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Cc: Dave Airlie <airlied@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Cc: mesa-dev@lists.freedesktop.org
Acked-by: Kenneth Graunke <kenneth@whitecape.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20210429103056.407067-7-matthew.auld@intel.com
|
|
git://anongit.freedesktop.org/drm/drm-intel into drm-next
Driver Changes:
- Prepare for local/device memory support on DG1 by starting
to use it for kernel internal allocations: context, ring
and engine scratch (Matt A, CQ, Abdiel, Imre)
- Sandybridge fix to avoid hard hang on ring resume (Chris)
- Limit imported dma-buf size to int32 (Matt A)
- Double check heartbeat timeout before resetting (Chris)
- Use new tasklet API for execution list (Emil)
- Fix SPDX checkpatch warnings (Chris)
- Fixes for various checkpatch warnings (Chris)
- Selftest improvements (Chris)
- Move the defer_request waiter active assertion to correct spot (Chris)
- Make local-memory probing a GT operation (Matt, Tvrtko)
- Protect against request freeing during cancellation on wedging (Chris)
- Retire unexpected starting state error dumping (Chris)
- Distinction of memory regions in debugging (Zbigniew)
- Always flush the submission queue on checking for idle (Chris)
- Consolidate 2big error check to helper (Matt)
- Decrease number of subplatform bits (Tvrtko)
- Remove unused internal request priority levels (Chris)
- Document the unused internal header bits in buddy allocator (Matt)
- Cleanup the region class/instance encoding (Matt)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YGxksaZGXHnFxlwg@jlahtine-mobl.ger.corp.intel.com
|
|
Throw it into a simple helper, and throw a warning if we encounter an
object which has been initialised with an object size that exceeds our
limit of INT_MAX pages.
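A hedged sketch of the check itself, with a fixed 4K page size assumed
for the example:

#include <errno.h>
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* 4K pages, for the sketch */

/* Reject anything whose page count does not fit in an int up front,
 * rather than overflowing deep inside the backend. */
static int check_object_size(uint64_t size)
{
	if ((size >> PAGE_SHIFT) > (uint64_t)INT_MAX)
		return -E2BIG;
	return 0;
}

int main(void)
{
	printf("%d\n", check_object_size(1ull << 20));	/* ok: 0 */
	printf("%d\n", check_object_size(1ull << 45));	/* too big: -E2BIG */
	return 0;
}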
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210122181514.541436-2-matthew.auld@intel.com
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
We want to remove the changing of ops structure for attaching
phys pages, so we need to kill off HAS_STRUCT_PAGE from ops->flags,
and put it in the bo.
This will remove a potential race of dereferencing the wrong obj->ops
without the ww mutex held.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
[danvet: apply with wiggle]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210323155059.628690-8-maarten.lankhorst@linux.intel.com
|
|
Sanity check the object size before allocating a new gem object.
Fixes: 97d553963250 ("drm/i915/region: convert object_create into object_init")
Testcase: igt/gem_create/create-massive
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210120104714.112812-1-matthew.auld@intel.com
|
|
Give more flexibility to the caller, if they already have an allocated
object, in case they wish to apply some transformation to the object
prior to handing it over to the region specific initialisation step,
like in gem_create_ext where we would like to first apply the extensions
to the object.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20210114182402.840247-3-matthew.auld@intel.com
|
|
Mixing I915_ALLOC_CONTIGUOUS and I915_ALLOC_MAX_SEGMENT_SIZE fared
badly. The two directives conflict, with the contiguous request setting
the min_order to the full size of the object, and the max-segment-size
setting the max_order to the limit of the DMA mapper. This results in a
situation where max_order < min_order, causing our sanity checks to
fail.
Instead of limiting the buddy block size, in the previous patch we split
the oversized buddy into multiple scatterlist elements.
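The collision can be seen with a few illustrative numbers (the chunk
size, object size and segment limit below are examples, not the
driver's values):

#include <stdint.h>
#include <stdio.h>

/* Smallest buddy order whose block covers `size`, given `chunk` bytes
 * at order 0. */
static unsigned int order_of(uint64_t size, uint64_t chunk)
{
	unsigned int order = 0;

	while ((chunk << order) < size)
		order++;
	return order;
}

int main(void)
{
	uint64_t chunk = 4096;			/* minimum buddy chunk */
	uint64_t obj_size = 8ull << 30;		/* 8 GiB contiguous request */
	uint64_t max_segment = 4ull << 30;	/* sg/DMA segment cap */

	/* CONTIGUOUS pushes min_order up to the whole object... */
	unsigned int min_order = order_of(obj_size, chunk);
	/* ...while MAX_SEGMENT_SIZE caps max_order at the segment limit. */
	unsigned int max_order = order_of(max_segment, chunk);

	printf("min_order=%u max_order=%u -> %s\n", min_order, max_order,
	       min_order > max_order ? "impossible" : "ok");
	return 0;
}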
Fixes: d2cf0125d4a1 ("drm/i915/lmem: Limit block size to 4G")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-2-chris@chris-wilson.co.uk
|
|
Adhere to the i915_sg_max_segment() limit on the lengths of individual
scatterlist elements, and in doing so split up very large chunks of lmem
into manageable pieces for the dma-mapping backend.
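Illustrative only, the splitting itself is just a loop capping each
scatterlist entry at the maximum segment size:

#include <stdint.h>
#include <stdio.h>

/* Describe one large physically-contiguous block with several sg
 * entries, none longer than the DMA layer's segment limit. */
static unsigned int split_block(uint64_t offset, uint64_t len,
				uint32_t max_segment)
{
	unsigned int nents = 0;

	while (len) {
		uint64_t seg = len < max_segment ? len : max_segment;

		printf("sg[%u]: offset=%#llx len=%#llx\n", nents,
		       (unsigned long long)offset, (unsigned long long)seg);
		offset += seg;
		len -= seg;
		nents++;
	}
	return nents;
}

int main(void)
{
	/* A 6 GiB lmem block with a 2 GiB segment cap becomes 3 entries. */
	split_block(0, 6ull << 30, 2u << 30);
	return 0;
}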
Reported-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Suggested-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20201202173444.14903-1-chris@chris-wilson.co.uk
|
|
Block sizes are only limited by the largest power-of-two that will fit
in the region size, but to construct an object we also require feeding
it into an sg list, where the upper limit of the sg entry is at most
UINT_MAX. Therefore to prevent issues with allocating blocks that are
too large, add the flag I915_ALLOC_MAX_SEGMENT_SIZE which should limit
block sizes to the i915_sg_segment_size().
v2: (matt)
- query the max segment.
- prefer flag to limit block size to 4G, since it's best not to assume
the user will feed the blocks into an sg list.
- simple selftest so we don't have to guess.
Cc: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: CQ Tang <cq.tang@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20201130134721.54457-1-matthew.auld@intel.com
|
|
Small objects that only occupy a single page are naturally contiguous,
so mark them as such and allow them the special abilities that come with
it.
A more thorough treatment would extend i915_gem_object_pin_map() to
support discontiguous lmem objects, following the example of
ioremap_prot() and use get_vm_area() + remap_io_sg().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200101220736.1073007-1-chris@chris-wilson.co.uk
|
|
Fixes coccicheck warning:
drivers/gpu/drm/i915/gem/i915_gem_region.c:88:2-3: Unneeded semicolon
drivers/gpu/drm/i915/gvt/gtt.c:1285:2-3: Unneeded semicolon
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: zhengbin <zhengbin13@huawei.com>
Acked-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/1576467845-60920-1-git-send-email-zhengbin13@huawei.com
|
|
Convert shmem to an intel_memory_region.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191018090751.28295-2-matthew.auld@intel.com
|
|
Volatile objects are marked as DONTNEED while pinned, therefore once
unpinned the backing store can be discarded. This is limited to kernel
internal objects.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: CQ Tang <cq.tang@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-4-matthew.auld@intel.com
|
|
Some kernel internal objects may need to be allocated as a contiguous
block; also, thinking ahead, the various kernel io_mapping interfaces
seem to expect it, although this is purely a limitation in the kernel
API... so perhaps something to be improved.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Cc: Michael J Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-3-matthew.auld@intel.com
|
|
Support memory regions, as defined by a given (start, end), and allow
creating GEM objects which are backed by said region. The immediate goal
here is to have something to represent our device memory, but later on
we also want to represent every memory domain with a region, so stolen,
shmem, and of course device. At some point we are probably going to
want to use a common struct here, such that we are better aligned with,
say, TTM.
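A hedged, user-space sketch of the idea (field names are illustrative):
every memory domain is described by the same small record, and an object
points at the region that backs it:

#include <stdint.h>
#include <stdio.h>

struct memory_region {
	const char *name;
	uint64_t start;
	uint64_t end;	/* exclusive, for the sketch */
};

struct gem_object {
	const struct memory_region *region;
	uint64_t size;
};

int main(void)
{
	static const struct memory_region lmem = {
		.name = "device-local", .start = 0, .end = 4ull << 30,
	};
	struct gem_object obj = { .region = &lmem, .size = 1 << 20 };

	printf("%llu byte object backed by %s (%llu bytes total)\n",
	       (unsigned long long)obj.size, obj.region->name,
	       (unsigned long long)(obj.region->end - obj.region->start));
	return 0;
}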
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20191008160116.18379-2-matthew.auld@intel.com
|