CVE-2025-62164

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.10.2 up to but excluding 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially to remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default, so maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the subsequent call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting it. The issue has been patched in version 0.11.1.
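For illustration, the sketch below shows the vulnerable pattern and one possible hardening: re-enabling PyTorch's sparse tensor invariant checks around deserialization so that malformed indices raise an error instead of corrupting memory. The function name and validation details are hypothetical and are not taken from vLLM's actual patch; torch.sparse.check_sparse_tensor_invariants is a real PyTorch API, but whether it alone restores load-time validation depends on the PyTorch version in use.

```python
import io

import torch


def load_prompt_embeds(payload: bytes) -> torch.Tensor:
    # Hypothetical helper, not vLLM's actual handler: deserialize a
    # user-supplied prompt-embedding tensor with sparse invariant
    # checks enabled, then densify it.
    #
    # As of PyTorch 2.8.0 these checks are off by default, which is
    # what allows a crafted sparse tensor with out-of-range indices
    # to reach to_dense() unvalidated and write out of bounds.
    with torch.sparse.check_sparse_tensor_invariants():
        tensor = torch.load(io.BytesIO(payload), weights_only=True)
        if not isinstance(tensor, torch.Tensor):
            raise ValueError("payload did not deserialize to a tensor")
        if tensor.layout != torch.strided:
            # With the checks active, malformed sparse indices raise
            # a RuntimeError here rather than corrupting memory.
            tensor = tensor.to_dense()
    return tensor
```

Since prompt embeddings are normally dense, an even stricter policy is to reject any non-strided layout outright rather than densifying it.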
Configurations

No configuration.

History

21 Nov 2025, 02:15

New CVE

Information

Published : 2025-11-21 02:15

Updated : 2025-11-21 15:13


NVD link : CVE-2025-62164

Mitre link : CVE-2025-62164

CVE.ORG link : CVE-2025-62164



Products Affected

No product.

CWE
CWE-20: Improper Input Validation

CWE-123: Write-what-where Condition

CWE-502: Deserialization of Untrusted Data

CWE-787: Out-of-bounds Write