PT-2026-30180 · Linux
Published 2026-04-03 · Updated 2026-04-03 · CVE-2026-31397
No severity ratings or metrics are available. When they are, we'll update the corresponding info on the page.
In the Linux kernel, the following vulnerability has been resolved:
mm/huge_memory: fix use of NULL folio in move_pages_huge_pmd()
move_pages_huge_pmd() handles UFFDIO_MOVE for both normal THPs and huge
zero pages. For the huge zero page path, src_folio is explicitly set to
NULL and is used as a sentinel to skip folio operations like lock and
rmap.
In the huge zero page branch, src_folio is NULL, so folio_mk_pmd(NULL,
pgprot) passes NULL through folio_pfn() and page_to_pfn(). With
CONFIG_SPARSEMEM_VMEMMAP this silently produces a bogus PFN, installing
a PMD pointing to non-existent physical memory. On other memory models
it is a NULL dereference.
Use page_folio(src_page) to obtain the valid huge zero folio from the
page, which was obtained from pmd_page() and remains valid throughout.
After commit d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge
zero folio special"), moved huge zero PMDs must remain special so that
vm_normal_page_pmd() continues to treat them as special mappings.
move_pages_huge_pmd() currently reconstructs the destination PMD in the
huge zero page branch, which drops PMD state such as pmd_special() on
architectures with CONFIG_ARCH_HAS_PTE_SPECIAL. As a result,
vm_normal_page_pmd() can treat the moved huge zero PMD as a normal page
and corrupt its refcount.
Instead of reconstructing the PMD from the folio, derive the destination
entry from src_pmdval after pmdp_huge_clear_flush(), then handle the PMD
metadata the same way move_huge_pmd() does for moved entries: mark it
soft-dirty and clear uffd-wp.
Affected Products
Linux