LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks
2026-03-27 08:07 · Author: thehackernews.com

Vulnerability / Artificial Intelligence

Cybersecurity researchers have disclosed three security vulnerabilities impacting LangChain and LangGraph that, if successfully exploited, could expose filesystem data, environment secrets, and conversation history.

Both LangChain and LangGraph are open-source frameworks used to build applications powered by Large Language Models (LLMs). LangGraph is built on top of LangChain to support more sophisticated, non-linear agentic workflows. According to statistics from the Python Package Index (PyPI), LangChain, LangChain-Core, and LangGraph were downloaded more than 52 million, 23 million, and 9 million times, respectively, in the last week alone.

"Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history," Cyera security researcher Vladimir Tokarev said in a report published Thursday.

The issues, in a nutshell, offer three independent paths that an attacker can leverage to drain sensitive data from any enterprise LangChain deployment. Details of the vulnerabilities are as follows -

  • CVE-2026-34070 (CVSS score: 7.5) - A path traversal vulnerability in LangChain ("langchain_core/prompts/loading.py") that allows access to arbitrary files via its prompt-loading API, which performs no path validation, by supplying a specially crafted prompt template.
  • CVE-2025-68664 (CVSS score: 9.3) - A deserialization of untrusted data vulnerability in LangChain that leaks API keys and environment secrets by passing as input a data structure that tricks the application into interpreting it as an already serialized LangChain object rather than regular user data.
  • CVE-2025-67644 (CVSS score: 7.3) - An SQL injection vulnerability in LangGraph's SQLite checkpoint implementation that allows an attacker to manipulate queries through metadata filter keys and run arbitrary SQL against the database.
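The SQLite checkpoint issue belongs to a classic bug class: building SQL text out of attacker-controlled dictionary keys. The sketch below is a minimal, hypothetical illustration of that class and its standard mitigation (an allowlist for filter keys plus bound parameters for values) — the table name, columns, and function are illustrative and not LangGraph's actual schema or code:

```python
import sqlite3

# Hypothetical checkpoint table; names are illustrative, not LangGraph's schema.
ALLOWED_KEYS = {"thread_id", "step"}  # allowlist of filterable columns

def search_checkpoints(conn: sqlite3.Connection, filters: dict) -> list:
    """Filter rows safely: keys are checked against an allowlist, and
    values are bound as parameters rather than interpolated into SQL."""
    clauses, params = [], []
    for key, value in filters.items():
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unsupported filter key: {key!r}")
        clauses.append(f"{key} = ?")  # key is allowlisted; value is bound below
        params.append(value)
    sql = "SELECT thread_id, step FROM checkpoints"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, step INTEGER)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 1), ('t2', 2)")

print(search_checkpoints(conn, {"thread_id": "t1"}))
# A malicious key such as "1=1); DROP TABLE checkpoints; --" never reaches
# the SQL string: it fails the allowlist check instead.
try:
    search_checkpoints(conn, {"1=1); DROP TABLE checkpoints; --": "x"})
except ValueError as exc:
    print("rejected:", exc)
```

Had the keys been interpolated directly into the query string, a crafted key would rewrite the statement itself — the pattern the advisory describes.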

Successful exploitation of the aforementioned flaws could allow an attacker to read sensitive files like Docker configurations, siphon secrets via prompt injection, and access conversation histories associated with sensitive workflows. It's worth noting that details of CVE-2025-68664 were also shared by Cyata in December 2025, which assigned it the codename LangGrinch.
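The standard defense against the path-traversal class behind the file-read scenario above is to resolve the requested path and refuse anything that escapes a designated base directory. A minimal sketch (an illustrative mitigation, not the actual LangChain patch; the function name and directory layout are hypothetical):

```python
import tempfile
from pathlib import Path

def safe_load(base_dir, user_path: str) -> str:
    """Resolve the requested file and refuse anything that escapes base_dir.
    Illustrates the generic fix for the path-traversal bug class."""
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"path escapes base directory: {user_path}")
    return target.read_text()

# Demo with a throwaway directory.
base = Path(tempfile.mkdtemp())
(base / "greeting.txt").write_text("hello")
print(safe_load(base, "greeting.txt"))  # hello

try:
    safe_load(base, "../../../../etc/passwd")
except PermissionError as exc:
    print("blocked:", exc)
```

Resolving the path before the containment check is what defeats `../` sequences and symlink tricks; checking the raw string alone would not.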

The vulnerabilities have been patched in the following versions -

  • CVE-2026-34070 - langchain-core >=1.2.22
  • CVE-2025-68664 - langchain-core 0.3.81 and 1.2.5
  • CVE-2025-67644 - langgraph-checkpoint-sqlite 3.0.1
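For pip-managed environments on the 1.x line, the fixes above translate into pins along these lines (package names as given in the advisories; users on the langchain-core 0.3.x line need >=0.3.81 instead, and exact specifiers should be verified against the official advisories):

```
langchain-core>=1.2.22
langgraph-checkpoint-sqlite>=3.0.1
```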

The findings once again underscore how artificial intelligence (AI) plumbing is not immune to classic security vulnerabilities, potentially putting entire systems at risk.

The development comes days after a critical security flaw impacting Langflow (CVE-2026-33017, CVSS score: 9.3) came under active exploitation within 20 hours of public disclosure, enabling attackers to exfiltrate sensitive data from developer environments.

Naveen Sunkavally, chief architect at Horizon3.ai, said the vulnerability shares the same root cause as CVE-2025-3248, and stems from unauthenticated endpoints executing arbitrary code. With threat actors moving quickly to exploit newly disclosed flaws, it's essential that users apply the patches as soon as possible for optimal protection.

"LangChain doesn't exist in isolation. It sits at the center of a massive dependency web that stretches across the AI stack. Hundreds of libraries wrap LangChain, extend it, or depend on it," Cyera said. "When a vulnerability exists in LangChain’s core, it doesn’t just affect direct users. It ripples outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path."



Source: https://thehackernews.com/2026/03/langchain-langgraph-flaws-expose-files.html