vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779 added in 0.15.1 can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
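The root cause is a parser differential: the hostname the validator checks is not necessarily the host the HTTP client connects to. A generic way to avoid this class of bug (not vLLM's actual fix; the function name and blocklist policy below are illustrative assumptions) is to parse the URL once, resolve the hostname, and check every resolved address before fetching:

```python
import ipaddress
import socket
from urllib.parse import urlsplit


def is_url_allowed(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback,
    link-local, or reserved address. Illustrative sketch only."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        return False
    host = parts.hostname
    if host is None:
        return False
    try:
        # Resolve every address the client could actually connect to.
        infos = socket.getaddrinfo(host, parts.port or 80,
                                   proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

To also defend against DNS rebinding, the request should then be pinned to one of the vetted resolved addresses rather than re-resolving the hostname at fetch time.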
Advisories

Source: Github GHSA
ID: GHSA-v359-jj2v-j536
Title: vLLM has SSRF Protection Bypass
Fixes

Solution

No solution given by the vendor.


Workaround

No workaround given by the vendor.

History

Mon, 09 Mar 2026 21:15:00 +0000

Type Values Removed Values Added
Description vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779 added in 0.15.1 can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
Title SSRF Protection Bypass in vLLM
Weaknesses CWE-918
References
Metrics cvssV3_1

{'score': 7.1, 'vector': 'CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L'}
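The CVSS:3.1 vector above is a slash-separated list of metric:value pairs (AV:N = network attack vector, C:H = high confidentiality impact, and so on). A minimal helper to split such a vector into a dict (an illustrative sketch, not a scoring implementation):

```python
def parse_cvss_vector(vector: str) -> dict[str, str]:
    """Split a CVSS v3.1 vector string into {metric: value} pairs."""
    prefix, _, metrics = vector.partition("/")
    if prefix != "CVSS:3.1":
        raise ValueError(f"unsupported CVSS version: {prefix}")
    return dict(item.split(":", 1) for item in metrics.split("/"))
```

For the vector above, this yields AV=N, AC=L, PR=L, UI=N, S=U, C=H, I=N, A=L, consistent with the 7.1 (high) score.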


MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2026-03-09T21:01:01.827Z

Reserved: 2026-02-09T17:13:54.066Z

Link: CVE-2026-25960

Vulnrichment

No data.

NVD

Status: Received

Published: 2026-03-09T21:16:15.537

Modified: 2026-03-09T21:16:15.537

Link: CVE-2026-25960

Redhat

No data.

OpenCVE Enrichment

No data.

Weaknesses