path: root/rust/helpers/regulator.c
author    Yu Kuai <yukuai3@huawei.com>    2025-08-07 11:24:12 +0800
committer Jens Axboe <axboe@kernel.dk>   2025-08-07 06:30:17 -0600
commit    42e6c6ce03fd3e41e39a0f93f9b1a1d9fa664338 (patch)
tree      b58739caedd28044ba06837ddb6461c2fc3e73dc /rust/helpers/regulator.c
parent    80f21806b8e34ae1e24c0fc6a0f0dfd9b055e130 (diff)
lib/sbitmap: convert shallow_depth from one word to the whole sbitmap
Currently, elevators record an internal 'async_depth' to throttle asynchronous requests, and they calculate shallow_depth from sb->shift, on the assumption that sb->shift gives the number of available tags in one word. However, that does not hold for the last word; see __map_depth():

    if (index == sb->map_nr - 1)
            return sb->depth - (index << sb->shift);

As a consequence, if the last word is used, more tags can be obtained than expected. For example, assume nr_requests=256, so there are four words; in the worst case, if the user sets nr_requests=32, the first word is also the last word, yet async_depth is still calculated from the full bits-per-word value (64), which is wrong.

On the other hand, due to cgroup QoS, bfq may allow only one request to be allocated, but setting shallow_depth=1 still allows up to one request per word (i.e., map_nr requests) to be allocated.

Fix these problems by making shallow_depth apply to the whole sbitmap instead of to each word, and change kyber, mq-deadline and bfq to follow this. A new helper __map_depth_with_shallow() is introduced to calculate the available bits in each word.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20250807032413.1469456-2-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
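The per-word depth logic above, and the idea of capping it with a whole-sbitmap shallow_depth, can be sketched in userspace C. This is a simplified model, not the kernel code: `struct sbitmap` is reduced to the three fields the commit message discusses, and `map_depth_with_shallow()` is a hypothetical stand-in for the new __map_depth_with_shallow() helper, assuming it consumes the whole-sbitmap budget word by word.

```c
#include <assert.h>

/* Simplified model of the sbitmap fields relevant here. */
struct sbitmap {
	unsigned int depth;	/* total number of bits in the sbitmap */
	unsigned int shift;	/* log2(bits per word) */
	unsigned int map_nr;	/* number of words */
};

/* Bits available in word `index`; mirrors __map_depth(): the last
 * word may hold fewer than (1 << shift) bits. */
static unsigned int map_depth(const struct sbitmap *sb, int index)
{
	if (index == (int)sb->map_nr - 1)
		return sb->depth - ((unsigned int)index << sb->shift);
	return 1U << sb->shift;
}

/* Hypothetical sketch of __map_depth_with_shallow(): shallow_depth is a
 * budget over the WHOLE sbitmap; each word may use at most whatever of
 * that budget is left after all preceding words are fully used. */
static unsigned int map_depth_with_shallow(const struct sbitmap *sb, int index,
					   unsigned int shallow_depth)
{
	unsigned int used = (unsigned int)index << sb->shift;
	unsigned int word = map_depth(sb, index);

	if (shallow_depth <= used)
		return 0;
	return (shallow_depth - used < word) ? shallow_depth - used : word;
}
```

With this scheme, shallow_depth=1 permits exactly one bit across the whole sbitmap (word 0 gets 1, every later word gets 0), instead of one bit per word as with the old per-word interpretation.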
Diffstat (limited to 'rust/helpers/regulator.c')
0 files changed, 0 insertions, 0 deletions