PT-2026-27659 · Linux · Linux
Published 2026-03-25 · Updated 2026-03-25
CVE-2026-23294
Severity: None
No severity ratings or metrics are available. When they are, we'll update the corresponding info on the page.
In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix race in devmap on PREEMPT_RT
On PREEMPT_RT kernels, the per-CPU xdp_dev_bulk_queue (bq) can be
accessed concurrently by multiple preemptible tasks on the same CPU.
The original code assumes bq_enqueue() and dev_flush() run atomically
with respect to each other on the same CPU, relying on
local_bh_disable() to prevent preemption. However, on PREEMPT_RT,
local_bh_disable() only calls migrate_disable() (when
PREEMPT_RT_NEEDS_BH_LOCK is not set) and does not disable
preemption, which allows CFS scheduling to preempt a task during
bq_xmit_all(), enabling another task on the same CPU to enter
bq_enqueue() and operate on the same per-CPU bq concurrently.
This leads to several races:
- Double-free / use-after-free on bq->q[]: bq_xmit_all() snapshots cnt = bq->count, then iterates bq->q[0..cnt-1] to transmit frames. If preempted after the snapshot, a second task can call bq_enqueue() -> bq_xmit_all() on the same bq, transmitting (and freeing) the same frames. When the first task resumes, it operates on stale pointers in bq->q[], causing use-after-free.
- bq->count and bq->q[] corruption: concurrent bq_enqueue() modifying bq->count and bq->q[] while bq_xmit_all() is reading them.
- dev_rx/xdp_prog teardown race: dev_flush() clears bq->dev_rx and bq->xdp_prog after bq_xmit_all(). If preempted between the bq_xmit_all() return and bq->dev_rx = NULL, a preempting bq_enqueue() sees dev_rx still set (non-NULL), skips adding bq to the flush list, and enqueues a frame. When dev_flush() resumes, it clears dev_rx and removes bq from the flush list, orphaning the newly enqueued frame.
- __list_del_clearprev() on the flush node: similar to the cpumap race, both tasks can call __list_del_clearprev() on the same flush node; the second dereferences the prev pointer already set to NULL.
The race between task A (dev_flush() -> bq_xmit_all()) and task B
(bq_enqueue() -> bq_xmit_all()) on the same CPU:
Task A (xdp_do_flush)                Task B (ndo_xdp_xmit redirect)
dev_flush(flush_list)
  bq_xmit_all(bq)
    cnt = bq->count /* e.g. 16 */
    /* start iterating bq->q[] */
    <-- CFS preempts Task A -->
                                     bq_enqueue(dev, xdpf)
                                       bq->count == DEV_MAP_BULK_SIZE
                                       bq_xmit_all(bq, 0)
                                         cnt = bq->count /* same 16! */
                                         ndo_xdp_xmit(bq->q[])
                                         /* frames freed by driver */
                                         bq->count = 0
    <-- Task A resumes -->
    ndo_xdp_xmit(bq->q[])
    /* use-after-free: frames already freed! */
Fix this by adding a local_lock_t to xdp_dev_bulk_queue and acquiring
it in bq_enqueue() and dev_flush(). These paths already run under
local_bh_disable(), so use local_lock_nested_bh(), which on non-RT is
a pure annotation with no overhead, and on PREEMPT_RT provides a
per-CPU sleeping lock that serializes access to the bq.
Related Identifiers
Affected Products
Linux