author     Linus Torvalds <torvalds@linux-foundation.org>  2024-11-30 15:43:02 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>  2024-11-30 15:43:02 -0800
commit     dd54fcced81d479d77acbeb4eea74b9ab9276bff (patch)
tree       848633b6e3c56138cf8ab68a74f15f4f7878f51a /io_uring/memmap.c
parent     133577cad6bf48e5a7848c4338124081393bfe8a (diff)
parent     7eb75ce7527129d7f1fee6951566af409a37a1c4 (diff)
Merge tag 'io_uring-6.13-20242901' of git://git.kernel.dk/linux
Pull more io_uring updates from Jens Axboe:
- Remove a leftover struct from when the cqwait registered waiting was
transitioned to regions.
- Fix for an issue introduced in this merge window, where nop->fd might
be used uninitialized. Ensure it's always set.
- Add capping of the task_work run in local task_work mode, to prevent
bursty and long chains from adding too much latency (a rough sketch of
the idea follows this list).
- Work around xa_store() leaving ->head non-NULL if it encounters an
allocation error during storing. Just a debug trigger, and can go
away once xa_store() behaves in a more expected way for this
condition. Not a major thing as it basically requires fault injection
to trigger it.
- Fix a few mapping corner cases
- Fix KCSAN complaint on reading the table size post unlock. Again not
a "real" issue, but it's easy to silence by just keeping the reading
inside the lock that protects it.
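The task_work capping change itself touches io_uring/io_uring.c and is not
part of the memmap.c diff shown on this page. What follows is only a
minimal, self-contained C sketch of the idea; the names (MAX_LOCAL_TW_RUN,
struct tw_item, run_local_tw) are invented for illustration and are not the
kernel's:

/*
 * Illustrative sketch only, not the kernel implementation.  Run at most
 * a fixed number of locally queued work items per flush and leave the
 * rest queued, so one long burst cannot add unbounded latency for the
 * task doing the flushing.
 */
#define MAX_LOCAL_TW_RUN	20

struct tw_item {
	struct tw_item *next;
	void (*fn)(struct tw_item *);
};

/* Returns how many items were run; anything left stays on *head. */
static int run_local_tw(struct tw_item **head)
{
	int done = 0;

	while (*head && done < MAX_LOCAL_TW_RUN) {
		struct tw_item *item = *head;

		*head = item->next;	/* leave the remainder queued */
		item->fn(item);
		done++;
	}
	return done;
}

A caller that still has items queued after hitting the cap simply
reschedules itself and picks the list up again later, which is what keeps
tail latency bounded.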
* tag 'io_uring-6.13-20242901' of git://git.kernel.dk/linux:
io_uring/tctx: work around xa_store() allocation error issue
io_uring: fix corner case forgetting to vunmap
io_uring: fix task_work cap overshooting
io_uring: check for overflows in io_pin_pages
io_uring/nop: ensure nop->fd is always initialized
io_uring: limit local tw done
io_uring: add io_local_work_pending()
io_uring/region: return negative -E2BIG in io_create_region()
io_uring: protect register tracing
io_uring: remove io_uring_cqwait_reg_arg
Diffstat (limited to 'io_uring/memmap.c')
-rw-r--r--   io_uring/memmap.c   13
1 file changed, 10 insertions, 3 deletions
diff --git a/io_uring/memmap.c b/io_uring/memmap.c
index 6e6ee79ba94fc..57de9bccbf508 100644
--- a/io_uring/memmap.c
+++ b/io_uring/memmap.c
@@ -73,6 +73,8 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
 	ret = io_mem_alloc_compound(pages, nr_pages, size, gfp);
 	if (!IS_ERR(ret))
 		goto done;
+	if (nr_pages == 1)
+		goto fail;
 
 	ret = io_mem_alloc_single(pages, nr_pages, size, gfp);
 	if (!IS_ERR(ret)) {
@@ -81,7 +83,7 @@ done:
 		*npages = nr_pages;
 		return ret;
 	}
-
+fail:
 	kvfree(pages);
 	*out_pages = NULL;
 	*npages = 0;
@@ -136,7 +138,12 @@ struct page **io_pin_pages(unsigned long uaddr, unsigned long len, int *npages)
 	struct page **pages;
 	int ret;
 
-	end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	if (check_add_overflow(uaddr, len, &end))
+		return ERR_PTR(-EOVERFLOW);
+	if (check_add_overflow(end, PAGE_SIZE - 1, &end))
+		return ERR_PTR(-EOVERFLOW);
+
+	end = end >> PAGE_SHIFT;
 	start = uaddr >> PAGE_SHIFT;
 	nr_pages = end - start;
 	if (WARN_ON_ONCE(!nr_pages))
@@ -229,7 +236,7 @@ int io_create_region(struct io_ring_ctx *ctx, struct io_mapped_region *mr,
 	if (!reg->size || reg->mmap_offset || reg->id)
 		return -EINVAL;
 	if ((reg->size >> PAGE_SHIFT) > INT_MAX)
-		return E2BIG;
+		return -E2BIG;
 	if ((reg->user_addr | reg->size) & ~PAGE_MASK)
 		return -EINVAL;
 	if (check_add_overflow(reg->user_addr, reg->size, &end))
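For the io_pin_pages() hunk above, the same overflow-safe rounding can be
tried out in userspace. This is only a sketch: pinned_page_count() is a
hypothetical helper, PAGE_SIZE/PAGE_SHIFT are assumed to be 4096/12, and
__builtin_add_overflow() stands in for the kernel's check_add_overflow():

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_SHIFT	12

/* Mirror of the patched calculation: reject any wrap-around instead of
 * silently computing a bogus page count. */
static long pinned_page_count(unsigned long uaddr, unsigned long len)
{
	unsigned long start, end;

	if (__builtin_add_overflow(uaddr, len, &end))
		return -1;			/* uaddr + len wraps */
	if (__builtin_add_overflow(end, PAGE_SIZE - 1, &end))
		return -1;			/* rounding up wraps */

	end >>= PAGE_SHIFT;
	start = uaddr >> PAGE_SHIFT;
	return (long)(end - start);
}

int main(void)
{
	/* A range that wraps the address space is rejected (-1)... */
	printf("%ld\n", pinned_page_count(~0UL - 100, 4096));
	/* ...while a normal 8 KiB range at 0x1000 still spans 2 pages. */
	printf("%ld\n", pinned_page_count(0x1000, 8192));
	return 0;
}

With the old open-coded `(uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT`, the
first call would wrap and yield a nonsensical page count instead of an
error, which is the corner case the patch closes.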