In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method.
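For context on the mechanism: in affected versions the chain evaluated LLM-generated code with Python's `exec`, so a question crafted to override the prompt can make the model emit code of the attacker's choosing. The sketch below is a minimal illustration of that pattern, assuming a chain that asks an LLM to translate a math question into Python and runs the output verbatim; `fake_llm` and `evaluate_math` are hypothetical stand-ins, not LangChain's actual API.

```python
# Minimal sketch of the vulnerable pattern (not LangChain's real code):
# the model's output is executed verbatim, so prompt injection becomes
# arbitrary code execution.

def fake_llm(prompt: str) -> str:
    """Stand-in for the real model call.

    A benign question yields code like "result = 2 + 2"; a prompt-injected
    question could instead steer the model toward something like
    "import os; result = os.popen('id').read()".
    """
    return "result = 2 + 2"


def evaluate_math(question: str) -> str:
    # The chain trusts the model's output and executes it directly;
    # this exec() call is where attacker-influenced code would run.
    generated = fake_llm(f"Write Python code that computes: {question}")
    scope: dict = {}
    exec(generated, scope)
    return str(scope.get("result"))


print(evaluate_math("What is 2 + 2?"))  # prints "4"
```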
References
| Link | Resource |
|---|---|
| https://github.com/hwchase17/langchain/issues/1026 | Issue Tracking |
| https://github.com/hwchase17/langchain/issues/814 | Exploit, Issue Tracking, Patch |
| https://github.com/hwchase17/langchain/pull/1119 | Patch |
| https://twitter.com/rharang/status/1641899743608463365/photo/1 | Exploit |
History
17 Apr 2023, 16:57
| Type | Values Removed | Values Added |
|---|---|---|
| New CVE | | |
Information
Published : 2023-04-05 02:15
Updated : 2024-02-04 23:37
NVD link : CVE-2023-29374
Mitre link : CVE-2023-29374
CVE.ORG link : CVE-2023-29374
Products Affected
langchain
- langchain
CWE
CWE-74
Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection')