LangChain & LangGraph: 7 Critical CVEs Threaten 90M Weekly Downloads (API Keys, RCE, SQL Injection)
By Kit 小克 | AI Tool Observer | 2026-03-29
🇹🇼 Seven Critical Vulnerabilities Hit LangChain/LangGraph: The AI Framework with 90 Million Weekly Downloads Is Leaking Your API Keys
If your AI application is built on LangChain or LangGraph, stop and read this first. On March 27, 2026, security researchers publicly disclosed seven critical vulnerabilities (CVEs) spanning both frameworks, affecting an ecosystem with more than 92 million weekly downloads. Nearly every mainstream AI application developer is directly or transitively exposed.
How Serious Are These Vulnerabilities?
The seven CVEs span nearly every high-risk vulnerability class:
- CVE-2025-68664 (CVSS 9.3 / Critical): Serialization injection in langchain-core (Python). Attackers can steal API keys and database credentials from environment variables via a malicious lc key in an LLM response. Affected: < 0.3.81 and 1.0.0–1.2.5.
- CVE-2025-68665 (CVSS 8.6 / High): The same class of flaw in the JavaScript/TypeScript @langchain/core. Affected: < 1.1.8.
- CVE-2026-34070 (CVSS 7.5 / High): Path traversal in langchain-core; load_prompt() can be made to read arbitrary files on the server. Affected: < 1.2.22.
- CVE-2025-67644 (CVSS 7.3 / High): SQL injection in the LangGraph SQLite checkpointer, exposing all conversation history and agent state. Affected: < 3.0.1.
- CVE-2026-27794 (CVSS 6.6 / Medium): Pickle deserialization RCE in LangGraph; the default pickle_fallback=True lets anyone who can write to the cache execute arbitrary code. Affected: langgraph < 1.0.6.
- CVE-2026-28277 (CVSS 6.8 / Medium): Unsafe msgpack deserialization in LangGraph. Affected: langgraph < 1.0.10.
- CVE-2026-27795 (CVSS 4.1 / Medium): SSRF bypass in the @langchain/community RecursiveUrlLoader; following HTTP redirects can reach AWS internal metadata endpoints. Affected: < 1.1.18.
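The SSRF item above hinges on a URL loader following redirects into link-local address space, such as the AWS metadata endpoint at 169.254.169.254. As a minimal illustration (this is not LangChain code), here is the kind of check a fetcher can apply to each hop before requesting it:

```python
import ipaddress
from urllib.parse import urlparse

def targets_internal_ip(url: str) -> bool:
    """Return True if the URL points at a literal private, link-local,
    or loopback IP address.

    Note: a real guard must also resolve hostnames and re-check after
    every redirect; this sketch only handles literal IP addresses.
    """
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not an IP literal
    return ip.is_private or ip.is_link_local or ip.is_loopback

# The AWS instance metadata endpoint abused in the SSRF report:
print(targets_internal_ip("http://169.254.169.254/latest/meta-data/"))  # True
print(targets_internal_ip("https://example.com/docs"))                  # False
```

In production this check belongs inside the redirect loop itself, since the whole point of the bypass is that the first URL looks harmless and only a later redirect lands on the internal address.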
The Most Dangerous Attack Vector: How Your API Keys Get Stolen
The attack path of CVE-2025-68664 deserves particular attention. Because LangChain trusts objects carrying the lc key during deserialization, an attacker only needs your agent to receive a crafted LLM response to trigger the flaw. Worse, earlier versions defaulted secretsFromEnv to true, meaning every secret in the environment (OpenAI keys, AWS credentials, database connection strings) could be exfiltrated in a single request.
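The core mistake here is treating a marker key inside attacker-reachable JSON as proof of trustworthiness. A generic sketch of the safer pattern (not LangChain's actual implementation; the type tags are hypothetical): revive only explicitly allowlisted types and reject everything else.

```python
import json

# Hypothetical allowlist of type tags this application agrees to revive
ALLOWED_TYPES = {"human_message", "ai_message"}

def revive(payload: str) -> dict:
    """Deserialize untrusted JSON, refusing unknown type tags.

    Contrast with trusting any object that merely carries a special
    marker key (like LangChain's 'lc'): here the decision to revive
    is an explicit allowlist, not the presence of a magic field.
    """
    obj = json.loads(payload)
    tag = obj.get("type")
    if tag not in ALLOWED_TYPES:
        raise ValueError(f"refusing to revive untrusted type: {tag!r}")
    return obj

revive('{"type": "ai_message", "content": "hi"}')   # accepted
try:
    revive('{"lc": 1, "type": "secrets_loader"}')   # rejected
except ValueError as err:
    print(err)
```

The lesson generalizes: anything an LLM returns has the same trust level as a form field on a public website, so reviving it into framework objects needs the same default-deny posture.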
The vulnerability was discovered by security researcher Yarden Porat (Cyata Research), who received the largest bug bounty in LangChain's history, $4,000. It was originally reported via Huntr in December 2025 but did not draw wide attention until March 2026.
Upgrade Now: Version Reference
- langchain-core (Python): upgrade to 0.3.81 or 1.2.22+
- @langchain/core (JS): upgrade to 1.1.8 or 0.3.80
- langgraph (Python): upgrade to 1.0.10+
- langgraph-checkpoint-sqlite: upgrade to 3.0.1+
- @langchain/community (JS): upgrade to 1.1.18+
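Given the table above, a quick way to audit an environment is to compare installed versions against the patched floors. A standard-library-only sketch; it checks the 1.x floors only (the 0.3.81 fix branch for langchain-core would need a separate rule), and the naive parser assumes plain X.Y.Z version strings:

```python
from importlib import metadata

# Patched version floors from the table above (1.x branches only)
MIN_SAFE = {
    "langchain-core": "1.2.22",
    "langgraph": "1.0.10",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def parse(version: str) -> tuple:
    """Naive numeric parse; sufficient for plain X.Y.Z strings."""
    return tuple(int(part) for part in version.split(".")[:3])

def vulnerable(installed: dict) -> list:
    """Return the packages still below their patched floor."""
    return sorted(
        name for name, floor in MIN_SAFE.items()
        if name in installed and parse(installed[name]) < parse(floor)
    )

# In a live environment, collect the installed versions like this:
# installed = {dist.metadata["Name"]: dist.version
#              for dist in metadata.distributions()}
installed = {"langchain-core": "1.2.5", "langgraph": "1.0.10"}
print(vulnerable(installed))  # ['langchain-core']
```

For anything beyond a one-off check, a proper tool like pip-audit with the OSV database will catch transitive dependencies this sketch misses.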
This Exposes a Structural Problem in the AI Framework Ecosystem
This is not a "just patch it and move on" incident. As the foundational framework beneath countless SaaS products, internal tools, and open-source projects, LangChain is depended on by applications whose developers often have no idea which version of langchain-core they are running. The deeper problem: AI frameworks were designed with "convenient LLM calls" as the top priority, while overlooking that an LLM response is itself untrusted external input. The collapse of the serialization trust boundary is the direct consequence of that design philosophy.
You only know whether a tool is good by trying it. But this time, don't try it until you have patched.
🇺🇸 LangChain & LangGraph: 7 Critical CVEs Threaten 90M Weekly Downloads — API Keys, RCE, SQL Injection
If your AI application is built on LangChain or LangGraph, stop what you are doing and read this. On March 27, 2026, security researchers publicly disclosed seven CVEs across both frameworks — affecting an ecosystem with over 92 million weekly downloads. Nearly every serious AI app developer is either directly or transitively exposed.
The Vulnerability Landscape
Seven CVEs spanning the most dangerous vulnerability classes in software security:
- CVE-2025-68664 (CVSS 9.3 / Critical): Serialization injection in langchain-core (Python). Attackers can steal API keys and database credentials from environment variables via a crafted LLM response containing a malicious lc key. Affected: < 0.3.81 and 1.0.0–1.2.5.
- CVE-2025-68665 (CVSS 8.6 / High): Same class of flaw in the JavaScript/TypeScript @langchain/core SDK. Affected: < 1.1.8.
- CVE-2026-34070 (CVSS 7.5 / High): Path traversal in langchain-core via load_prompt(), enabling arbitrary file reads on the server. Affected: < 1.2.22.
- CVE-2025-67644 (CVSS 7.3 / High): SQL injection in the LangGraph SQLite checkpoint implementation; all stored conversation history and agent state exposed. Affected: langgraph-checkpoint-sqlite < 3.0.1.
- CVE-2026-27794 (CVSS 6.6 / Medium): Pickle deserialization RCE in LangGraph. The default pickle_fallback=True means anyone with write access to the cache backend can execute arbitrary code. Affected: langgraph < 1.0.6.
- CVE-2026-28277 (CVSS 6.8 / Medium): Unsafe msgpack deserialization in LangGraph checkpointers. Affected: langgraph < 1.0.10.
- CVE-2026-27795 (CVSS 4.1 / Medium): SSRF bypass in the @langchain/community RecursiveUrlLoader; redirect-following allows access to AWS IMDSv1 metadata endpoints. Affected: < 1.1.18.
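The pickle_fallback finding is an instance of a long-documented property of Python's pickle format: the byte stream itself can instruct the unpickler to call any importable function. A harmless, self-contained demonstration, using sorted as a stand-in for something like os.system:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to rebuild the object: "call this
    # callable with these arguments". An attacker who controls the
    # bytes can point it at any importable function instead.
    def __reduce__(self):
        return (sorted, ("cba",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)

# Unpickling did not rebuild a Payload; it executed sorted("cba").
print(result)  # ['a', 'b', 'c']
```

This is why a writable pickle-backed cache is equivalent to code execution: pickle.loads on attacker-controlled bytes is the vulnerability, regardless of what the surrounding framework intends.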
How Your API Keys Get Stolen: The CVE-2025-68664 Attack Path
The most severe flaw deserves a clear walkthrough. LangChain's serialization system uses the lc key as a trust signal — if a deserialized object contains it, the framework treats it as a legitimate LangChain object. The problem: LLM responses flow directly into these deserialization paths without sanitization.
An attacker just needs your agent to receive a crafted LLM response. Combined with the old default of secretsFromEnv=true, every environment variable in your runtime — OpenAI keys, AWS credentials, database connection strings — could be exfiltrated in a single inference call. Twelve vulnerable code paths were identified, including astream_events(), astream_log(), and message history systems.
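Beyond patching, one defense-in-depth step against this class of exfiltration is keeping the agent process's environment minimal, so even a successful env-var dump yields little. A sketch of the filtering idea; the variable names and allowlist are illustrative, not a recommendation of which secrets to keep:

```python
def scrub_env(env: dict, allow: frozenset) -> dict:
    """Keep only allowlisted variables for a child agent process."""
    return {k: v for k, v in env.items() if k in allow}

# Only what the agent actually needs to run
ALLOW = frozenset({"PATH", "HOME", "LANG", "OPENAI_API_KEY"})

# Hypothetical runtime environment with unrelated secrets present
env = {
    "PATH": "/usr/bin",
    "OPENAI_API_KEY": "sk-demo",
    "AWS_SECRET_ACCESS_KEY": "should-not-be-here",
    "DATABASE_URL": "postgres://user:pass@db/prod",
}

clean = scrub_env(env, ALLOW)
print(sorted(clean))  # ['OPENAI_API_KEY', 'PATH']
```

In practice this means launching agent workers via subprocess with an explicit env= mapping, or injecting secrets from a vault at call time instead of parking them in the parent environment.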
The researcher behind this find, Yarden Porat of Cyata Research, received a $4,000 bounty — the largest in LangChain program history. The flaw was originally reported via Huntr in December 2025, but only gained widespread attention in March 2026.
Patch Reference: What to Upgrade Now
- langchain-core (Python): upgrade to 0.3.81 or 1.2.22+
- @langchain/core (JS/TS): upgrade to 1.1.8 or 0.3.80
- langgraph (Python): upgrade to 1.0.10+
- langgraph-checkpoint-sqlite: upgrade to 3.0.1+
- @langchain/community (JS/TS): upgrade to 1.1.18+
The Deeper Structural Problem
This is not just a patch-and-move-on incident. LangChain underpins hundreds of production AI products, internal tools, and open-source projects. Most developers have no visibility into which version of langchain-core their dependencies are pulling in transitively. More fundamentally, AI frameworks have been designed with LLM convenience as the top priority — treating LLM responses as trusted input rather than the untrusted, attacker-controlled data they actually are. These CVEs are the direct consequence of that design philosophy.
Run pip show langchain-core langgraph right now and check your versions. The good news: patches exist for everything. The bad news: your production deployment may not have them yet.
"You only know whether a tool is good by trying it." But this time: patch first, try later.
Sources
- LangChain, LangGraph Flaws Expose Files, Secrets, Databases — The Hacker News
- All I Want for Christmas is Your Secrets: CVE-2025-68664 — Cyata Research
- LangChain and LangGraph Vulnerabilities Expose Files, Secrets, and Databases — Vulert
AI Tool Observer — Daily curated AI Agent & tool trends