vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
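A minimal sketch of the flawed pattern described above, assuming a simplified loader: the load_sub_config helper, the config_dict parameter, and the example repo IDs are illustrative and not vLLM's literal source. get_class_from_dynamic_module is the real transformers helper named in the advisory, and importing the module it downloads runs module-level Python before any trust check can intervene.

    # Hypothetical reconstruction of the vulnerable code path; names below
    # are illustrative, not copied from vLLM.
    from transformers.dynamic_module_utils import get_class_from_dynamic_module

    def load_sub_config(config_dict: dict, model_repo: str):
        # A "frontend" repo's config.json can carry an auto_map entry such as:
        #   {"auto_map": {"AutoConfig": "attacker/backend-repo--conf.EvilConfig"}}
        # where the part before "--" names a separate (backend) repository.
        class_ref = config_dict.get("auto_map", {}).get("AutoConfig")
        if class_ref is None:
            return None
        # get_class_from_dynamic_module() downloads the referenced .py file
        # from the backend repo and imports it; importing executes
        # module-level Python, so attacker code runs on the host here.
        cls = get_class_from_dynamic_module(class_ref, model_repo)
        # The flaw: this path never consults trust_remote_code, so a caller's
        # trust_remote_code=False in get_config is silently bypassed.
        return cls(**config_dict)

Because the trigger is just a config.json entry, setting trust_remote_code=False is not by itself a sufficient mitigation on affected versions; the fix named in the advisory is upgrading to 0.11.1 or later.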
History

Fri, 05 Dec 2025 00:15:00 +0000

Type: References
Type: Metrics
Values Removed: threat_severity: None
Values Added: threat_severity: Important


Wed, 03 Dec 2025 18:00:00 +0000

Type: First Time appeared
Values Added: Vllm, Vllm vllm
Type: CPEs
Values Added: cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:*
Type: Vendors & Products
Values Added: Vllm, Vllm vllm

Tue, 02 Dec 2025 15:15:00 +0000

Type: Metrics
Values Added: ssvc: {'options': {'Automatable': 'yes', 'Exploitation': 'none', 'Technical Impact': 'total'}, 'version': '2.0.3'}


Tue, 02 Dec 2025 12:30:00 +0000

Type: First Time appeared
Values Added: Vllm-project, Vllm-project vllm
Type: Vendors & Products
Values Added: Vllm-project, Vllm-project vllm

Mon, 01 Dec 2025 23:00:00 +0000

Type: Description
Values Added: vLLM is an inference and serving engine for large language models (LLMs). Prior to 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, the config class resolves that mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. This fetches and executes Python from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend will silently run the backend’s code on the victim host. This vulnerability is fixed in 0.11.1.
Type: Title
Values Added: vLLM vulnerable to remote code execution via transformers_utils/get_config
Type: Weaknesses
Values Added: CWE-94
Type: References
Type: Metrics
Values Added: cvssV3_1: {'score': 7.1, 'vector': 'CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:U/C:H/I:H/A:H'}


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published: 2025-12-01T22:45:42.566Z

Updated: 2025-12-02T14:14:58.324Z

Reserved: 2025-12-01T18:22:06.865Z

Link: CVE-2025-66448

Vulnrichment

Updated: 2025-12-02T14:14:54.375Z

NVD

Status: Analyzed

Published: 2025-12-01T23:15:54.213

Modified: 2025-12-03T17:52:26.280

Link: CVE-2025-66448

Red Hat

Severity: Important

Public Date: 2025-12-01T22:45:42Z

Links: CVE-2025-66448 - Bugzilla