llama-cpp-python provides Python bindings for llama.cpp. It relies on the `Llama` class in `llama.py` to load `.gguf` llama.cpp model files. The `__init__` constructor of `Llama` takes several parameters that configure how the model is loaded and run. Besides NUMA, LoRA settings, tokenizer loading, and hardware settings, `__init__` also reads the chat template from the targeted `.gguf` file's metadata and passes it to `llama_chat_format.Jinja2ChatFormatter.to_chat_handler()` to construct the model's `self.chat_handler`. However, `Jinja2ChatFormatter` parses the chat template from the metadata with a sandbox-less `jinja2.Environment`, which is later rendered in `__call__` to build the interaction prompt. This enables jinja2 server-side template injection, leading to remote code execution via a carefully crafted payload.
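The distinction at the heart of this vulnerability can be sketched in a few lines. The snippet below is illustrative, not the vulnerable code itself: it contrasts a plain `jinja2.Environment` (as used by `Jinja2ChatFormatter`) with `jinja2.sandbox.SandboxedEnvironment` when rendering an attacker-controlled template. The payload is a benign stand-in for the classic SSTI gadget chain that, extended with `os.popen`, yields code execution.

```python
# Sketch of why rendering an untrusted chat template is dangerous.
# Assumes jinja2 is installed; the payload is illustrative, not the
# actual exploit from the advisory.
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

# A "chat template" that walks Python internals via dunder attribute
# access -- the first step of a typical jinja2 SSTI gadget chain.
payload = "{{ ''.__class__.__mro__[1].__subclasses__()|length }}"

# A plain Environment (what the vulnerable code path used) renders it,
# handing the attacker introspection over loaded classes.
unsafe = Environment().from_string(payload).render()

# SandboxedEnvironment rejects unsafe attribute access at render time.
try:
    SandboxedEnvironment().from_string(payload).render()
    blocked = False
except SecurityError:
    blocked = True
```

Because the template comes straight from model metadata, anyone who can get a victim to load a malicious `.gguf` file controls the template source; switching to `SandboxedEnvironment` is the standard mitigation for rendering untrusted jinja2 templates.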
Advisories
Source: Github GHSA
ID: GHSA-56xg-wfcc-g829
Title: llama-cpp-python vulnerable to Remote Code Execution by Server-Side Template Injection in Model Metadata
Fixes

Solution

No solution given by the vendor.


Workaround

No workaround given by the vendor.

History

No history.

MITRE

Status: PUBLISHED

Assigner: GitHub_M

Published:

Updated: 2024-08-02T02:51:10.739Z

Reserved: 2024-05-02T06:36:32.439Z

Link: CVE-2024-34359

Vulnrichment

Updated: 2024-08-02T02:51:10.739Z

NVD

Status: Awaiting Analysis

Published: 2024-05-14T15:38:45.093

Modified: 2024-11-21T09:18:30.130

Link: CVE-2024-34359

Redhat

No data.

OpenCVE Enrichment

No data.

Weaknesses