CVE-2025-48942

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, hitting the /v1/completions API with an invalid json_schema as a guided decoding parameter kills the vllm server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, which covers the same crash triggered by an invalid regex rather than an invalid JSON schema. Version 0.9.0 fixes the issue.
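The crash can be demonstrated with a single HTTP request. The sketch below is illustrative rather than taken verbatim from the advisory: it assumes a vulnerable vLLM 0.8.x server listening on localhost:8000, and the model name and the malformed schema value are placeholders. On affected versions, the guided-decoding backend raises an exception while processing the schema, and because it goes uncaught it terminates the server process.

```python
import requests  # pip install requests

# Minimal reproduction sketch, assuming a vulnerable vLLM 0.8.x server
# on localhost:8000. The model name and the malformed schema below are
# placeholders, not values taken from the advisory.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "my-model",          # placeholder model name
        "prompt": "Say hello",
        "max_tokens": 16,
        # vLLM's extra parameter for guided decoding; an invalid schema
        # here is enough to crash the server on affected versions.
        "guided_json": {"type": "no-such-type"},
    },
    timeout=30,
)
print(resp.status_code, resp.text)
```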
Configurations

Configuration 1

cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*

History

24 Jun 2025, 17:44

References (tags added)
  • https://github.com/vllm-project/vllm/commit/08bf7840780980c7568c573c70a6a8db94fd45ff - Patch
  • https://github.com/vllm-project/vllm/issues/17248 - Issue Tracking
  • https://github.com/vllm-project/vllm/pull/17623 - Issue Tracking, Patch
  • https://github.com/vllm-project/vllm/security/advisories/GHSA-6qc9-v4r8-22xg - Exploit, Vendor Advisory

First Time
  • Vllm vllm
  • Vllm

CPE
  • cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
Summary
  • (es) vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, calling the /v1/completions API with an invalid json_schema as a guided decoding parameter brings down the vllm server. This vulnerability is similar to GHSA-9hcf-v7m4-6m2j/CVE-2025-48943, which covers a regex rather than a JSON schema. Version 0.9.0 fixes the issue.

30 May 2025, 19:15

New CVE

Information

Published : 2025-05-30 19:15

Updated : 2025-06-24 17:44


NVD link : CVE-2025-48942

Mitre link : CVE-2025-48942

CVE.ORG link : CVE-2025-48942



Products Affected

vllm

  • vllm
CWE

CWE-248 - Uncaught Exception
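CWE-248 names the failure mode: an exception raised while handling the user-supplied schema propagates uncaught and takes the whole server down. Below is a minimal sketch of the defensive pattern; it is not vLLM's actual fix (see PR #17623 for that), and the endpoint, the placeholder response, and the use of the jsonschema library are assumptions for illustration. The idea is to validate the schema up front and convert failures into a 400 response instead of letting them kill the process.

```python
from http import HTTPStatus

import jsonschema
from fastapi import FastAPI, HTTPException

# Illustrative endpoint only: vLLM's server is FastAPI-based, but this
# handler is a simplified stand-in, not vLLM's actual implementation.
app = FastAPI()

@app.post("/v1/completions")
async def completions(request: dict) -> dict:
    schema = request.get("guided_json")
    if schema is not None:
        try:
            # Validate the schema itself before handing it to the
            # guided-decoding backend, so a malformed schema becomes a
            # 400 response rather than an uncaught exception (CWE-248).
            jsonschema.Draft202012Validator.check_schema(schema)
        except jsonschema.exceptions.SchemaError as exc:
            raise HTTPException(
                status_code=HTTPStatus.BAD_REQUEST,
                detail=f"Invalid guided_json schema: {exc.message}",
            ) from exc
    # Placeholder response; real generation logic omitted.
    return {"object": "text_completion", "choices": []}
```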