PT-2026-27272 · ggml-org · llama.cpp
Published 2026-03-24 · Updated 2026-03-24
CVE-2026-33298 · CVSS v3.1: 7.8 (High) · AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
llama.cpp is a C/C++ library for inference of several LLM models. Prior to release b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a size far smaller than actually required (e.g., 4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This vulnerability enables potential remote code execution (RCE) via memory corruption. Release b7824 contains a fix.
Integer Overflow
Heap-Based Buffer Overflow
Related Identifiers
Affected Products
llama.cpp