author     GONG Ruiqi <gongruiqi1@huawei.com>    2025-02-12 16:15:05 +0800
committer  Vlastimil Babka <vbabka@suse.cz>      2025-03-04 08:53:50 +0100
commit     539f552892b757ca7a9eb1ba34f5be3c0a947f59 (patch)
tree       05464174707ace0996e355eee18646a235f31ec2
parent     f1157db8b539cf1a98678667255fa7efa1f5b2cb (diff)
slab: Achieve better kmalloc caches randomization in kvmalloc
As revealed by this writeup[1], due to the fact that __kmalloc_node (now renamed to __kmalloc_node_noprof) is an exported symbol and will never get inlined, using it in kvmalloc_node (now __kvmalloc_node_noprof) makes the RET_IP inside always point to the same address:

    upper_caller
      kvmalloc
        kvmalloc_node
          kvmalloc_node_noprof
            __kvmalloc_node_noprof      <-- all macros all the way down here
              __kmalloc_node_noprof
                __do_kmalloc_node(.., _RET_IP_)
                  ...                   <-- _RET_IP_ points to here

That literally means all kmalloc invocations via kvmalloc would use the same seed for cache randomization (CONFIG_RANDOM_KMALLOC_CACHES), which makes this hardening non-functional.

The root cause of this problem, IMHO, is that RET_IP alone cannot identify the actual allocation site when kmalloc is called inside non-inlined wrappers or helper functions, and there could be similar cases in other functions. Nevertheless, no good general solution has come to mind, so for now let's solve this specific case first.

For __kvmalloc_node_noprof, replace __kmalloc_node_noprof with a direct call to __do_kmalloc_node, so that RET_IP takes the return address of kvmalloc and differentiates each kvmalloc invocation:

    upper_caller
      kvmalloc
        kvmalloc_node
          kvmalloc_node_noprof
            __kvmalloc_node_noprof      <-- all macros all the way down here
              __do_kmalloc_node(.., _RET_IP_)
                ...                     <-- _RET_IP_ points to here

Thanks to Tamás Koczka for the report and discussion!

Link: https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md?plain=1#L259 [1]
Reported-by: Tamás Koczka <poprdi@google.com>
Signed-off-by: GONG Ruiqi <gongruiqi1@huawei.com>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
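For context on why a constant _RET_IP_ defeats the hardening: CONFIG_RANDOM_KMALLOC_CACHES selects among several copies of each kmalloc cache by mixing the caller address with a per-boot random value. The sketch below is a simplified, userspace-buildable illustration of that idea, not the kernel's actual implementation; pick_kmalloc_cache, NR_RANDOM_CACHES and boot_seed are hypothetical names invented for this example. It shows how a single fixed caller address collapses every kvmalloc user into one cache copy, while per-call-site addresses spread them across copies.

/*
 * Simplified illustration of caller-keyed cache selection.
 * Hypothetical names; not the kernel's implementation.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_RANDOM_CACHES 16
static uint64_t boot_seed = 0x9e3779b97f4a7c15ULL; /* stand-in for a per-boot random seed */

/* Map a caller address to one of NR_RANDOM_CACHES cache copies. */
static unsigned int pick_kmalloc_cache(unsigned long caller)
{
	uint64_t h = (caller ^ boot_seed) * 0x9e3779b97f4a7c15ULL; /* cheap hash */
	return (unsigned int)(h >> 60) % NR_RANDOM_CACHES;
}

int main(void)
{
	/*
	 * Before the patch: every kvmalloc user hands the same address
	 * (a call site inside the non-inlined __kmalloc_node_noprof path)
	 * to the hash, so they all share one cache index.
	 */
	unsigned long fixed_ret_ip = 0xffffffff81234567UL;
	printf("caller A -> cache %u\n", pick_kmalloc_cache(fixed_ret_ip));
	printf("caller B -> cache %u\n", pick_kmalloc_cache(fixed_ret_ip));

	/*
	 * After the patch: _RET_IP_ is taken in the macro-expanded kvmalloc
	 * body, so each call site contributes its own address and typically
	 * lands in a different cache copy.
	 */
	printf("caller A -> cache %u\n", pick_kmalloc_cache(0xffffffff81a00010UL));
	printf("caller B -> cache %u\n", pick_kmalloc_cache(0xffffffff81b00020UL));
	return 0;
}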
-rw-r--r--  mm/slub.c | 6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index abc982d68febc..1f7d1d260eebd 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4925,9 +4925,9 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
 	 * It doesn't really make sense to fallback to vmalloc for sub page
 	 * requests
 	 */
-	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
-				    kmalloc_gfp_adjust(flags, size),
-				    node);
+	ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
+				kmalloc_gfp_adjust(flags, size),
+				node, _RET_IP_);
 	if (ret || size <= PAGE_SIZE)
 		return ret;