PT-2026-27707 · Linux · Linux

Published: 2026-03-25 · Updated: 2026-03-25 · CVE-2026-23342

No severity ratings or metrics are available yet.
In the Linux kernel, the following vulnerability has been resolved:
bpf: Fix race in cpumap on PREEMPT_RT
On PREEMPT_RT kernels, the per-CPU xdp_bulk_queue (bq) can be accessed concurrently by multiple preemptible tasks on the same CPU.
The original code assumes bq_enqueue() and cpu_map_flush() run atomically with respect to each other on the same CPU, relying on local_bh_disable() to prevent preemption. However, on PREEMPT_RT, local_bh_disable() only calls migrate_disable() (when PREEMPT_RT_NEEDS_BH_LOCK is not set) and does not disable preemption, which allows CFS scheduling to preempt a task during bq_flush_to_queue(), enabling another task on the same CPU to enter bq_enqueue() and operate on the same per-CPU bq concurrently.
This leads to several races:
  1. Double list_del_clearprev(): after bq->count is reset in bq_flush_to_queue(), a preempting task can call bq_enqueue() -> bq_flush_to_queue() on the same bq when bq->count reaches CPU_MAP_BULK_SIZE. Both tasks then call list_del_clearprev() on the same bq->flush_node; the second call dereferences the prev pointer that was already set to NULL by the first.
  2. bq->count and bq->q[] races: concurrent bq_enqueue() can corrupt the packet queue while bq_flush_to_queue() is processing it.
The race between Task A (cpu_map_flush -> bq_flush_to_queue) and Task B (bq_enqueue -> bq_flush_to_queue) on the same CPU:
Task A (xdp_do_flush)                    Task B (cpu_map_enqueue)
---------------------                    ------------------------
bq_flush_to_queue(bq)
  spin_lock(&q->producer_lock)
  /* flush bq->q[] to ptr_ring */
  bq->count = 0
  spin_unlock(&q->producer_lock)
<-- CFS preempts Task A -->
                                         bq_enqueue(rcpu, xdpf)
                                           bq->q[bq->count++] = xdpf
                                           /* ... more enqueues until full ... */
                                           bq_flush_to_queue(bq)
                                             spin_lock(&q->producer_lock)
                                             /* flush to ptr_ring */
                                             spin_unlock(&q->producer_lock)
                                           list_del_clearprev(flush_node)
                                           /* sets flush_node.prev = NULL */
<-- Task A resumes -->
list_del_clearprev(flush_node)
  flush_node.prev->next = ...
  /* prev is NULL -> kernel oops */
Fix this by adding a local_lock_t to xdp_bulk_queue and acquiring it in bq_enqueue() and cpu_map_flush(). These paths already run under local_bh_disable(), so use local_lock_nested_bh(), which on non-RT is a pure annotation with no overhead, and on PREEMPT_RT provides a per-CPU sleeping lock that serializes access to the bq.
To reproduce, insert an mdelay(100) between bq->count = 0 and list_del_clearprev() in bq_flush_to_queue(), then run the reproducer provided by syzkaller.

Related Identifiers

CVE-2026-23342

Affected Products

Linux