PT-2026-29877 · vLLM

Published: 2026-04-02 · Updated: 2026-04-02

CVE-2026-34760 · CVSS v3.1: 5.9 (Medium) · AV:N/AC:H/PR:L/UI:N/S:U/C:N/I:H/A:L
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 up to (but not including) version 0.18.0, Librosa defaults to numpy.mean when downmixing multi-channel audio to mono, whereas the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy causes an inconsistency between the audio humans hear (e.g., through headphones or regular speakers) and the audio processed by AI models in any pipeline that loads audio via Librosa, such as vLLM or Transformers. This issue has been patched in version 0.18.0.
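The discrepancy can be illustrated with a minimal sketch. The weighted variant below is an assumption for illustration only: it applies a -3 dB (1/√2) gain in the spirit of ITU-R BS.775 downmix coefficients, and is not the exact algorithm adopted in the patched release. The point is that the two strategies produce different mono signals from the same stereo input, so a model fed the mean-downmixed signal is not hearing what a human hears.

```python
import numpy as np

def mean_downmix(stereo: np.ndarray) -> np.ndarray:
    # Pre-0.18.0 Librosa behavior described in the advisory:
    # simple arithmetic mean across channels (axis 0 = channels).
    return stereo.mean(axis=0)

def weighted_downmix(stereo: np.ndarray,
                     weight: float = 1.0 / np.sqrt(2.0)) -> np.ndarray:
    # Hypothetical ITU-R BS.775-style downmix: sum the channels and
    # apply a -3 dB gain. The exact standard-compliant coefficients
    # depend on the channel layout; this is a two-channel sketch.
    return weight * (stereo[0] + stereo[1])

if __name__ == "__main__":
    # Two channels carrying uncorrelated content.
    stereo = np.array([[1.0, 0.0, 0.5],
                       [0.0, 1.0, 0.5]])
    print("mean:    ", mean_downmix(stereo))
    print("weighted:", weighted_downmix(stereo))
```

For this input the mean produces samples of amplitude 0.5 while the weighted downmix produces about 0.707, so any downstream model sees a systematically different signal level than a listener would.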


Related Identifiers

CVE-2026-34760

Affected Products

vLLM