MCP 安全漏洞大爆發:AI Agent 最危險的攻擊入口 | MCP Security Crisis: The Hidden Attack Surface in Your AI Stack
By Kit 小克 | AI Tool Observer | 2026-03-30
🇹🇼 MCP 安全漏洞大爆發:AI Agent 最危險的攻擊入口
MCP(Model Context Protocol)被稱為「AI Agent 的 HTTP」,正快速成為連接 LLM 與工具的標準協議。但安全研究員最新發現:這個正在爆炸成長的生態系,同時也是目前 AI 領域最危險的攻擊入口。Palo Alto Unit 42、Adversa AI、arXiv 等機構在 2025-2026 年間陸續揭露了多個嚴重漏洞,開發者如果還沒採取行動,現在是時候了。
MCP 安全漏洞到底有多嚴重?
根據 Palo Alto Networks Unit 42 的最新研究,MCP 的 Sampling 功能存在三類高危攻擊向量:
- 資源耗盡攻擊(Resource Theft):攻擊者透過隱藏請求,在背景持續消耗用戶的 LLM Token 配額,讓你的 API 帳單暴增而不自知。
- 對話劫持(Conversation Hijacking):惡意指令被注入並在多輪對話中持續存活,讓 AI Agent 的後續行為都受到操控。
- 隱蔽工具呼叫(Covert Tool Invocation):在用戶毫不知情的情況下,Agent 自動執行檔案系統操作或外部 API 呼叫。
真實 CVE:不只是理論漏洞
這些不是紙上談兵,已有多個 CVE 被正式登錄:
- CVE-2025-6514(CVSS 9.6):mcp-remote 套件存在任意 OS 命令執行漏洞,任何使用此套件的 MCP 伺服器都可能被遠端控制。
- CVE-2025-68143 / 68144 / 68145:Anthropic 官方的
mcp-server-git中存在三個連鎖漏洞,組合使用可達成完整遠端代碼執行(RCE)。
根據 The Vulnerable MCP Project 掃描的 500+ 個公開 MCP 伺服器,38% 完全沒有驗證機制。這代表大量正在運行的 AI Agent 工具,任何人都可以直接呼叫。
工具投毒(Tool Poisoning):最難防的攻擊
最令人擔憂的攻擊方式,是透過惡意的工具描述(Tool Description)進行「工具投毒」。攻擊者只需在 MCP 工具的 description 欄位中嵌入隱藏指令,就能讓 LLM 在解析工具清單時,被注入惡意行為。這種攻擊對 Cursor、Claude Desktop 等工具都有效,且難以被用戶察覺。
開發者現在該怎麼做?
如果你正在開發或使用 MCP 工具,以下幾點是最小必要行動:
- 立即更新所有 MCP 相關套件至最新版本,特別是 mcp-remote 和 mcp-server-git。
- 審查工具描述:不要信任來源不明的 MCP 工具,閱讀每一個 tool description,確認沒有夾帶奇怪指令。
- 啟用沙盒隔離:在 Docker 容器或受限環境中執行 MCP 伺服器,限制其檔案系統與網路存取。
- 加入身份驗證:如果你自己架設 MCP 伺服器,確保所有端點都需要 API Key 或 OAuth 驗證。
- 查閱官方安全指引:MCP 規格書已新增 Security Best Practices 章節,這是目前最具權威性的參考。
這不是危言聳聽,是現在進行式
MCP 的採用速度遠遠超過安全研究的速度。工程師們把 MCP 工具接上生產環境的 AI Agent,往往沒有做任何安全審查。Red Hat、eSentire、Adversa AI 等資安公司都已在 2025-2026 年間發布正式警告。現在是開始把 MCP 安全當作正式需求,而不是事後補救的時候了。
好不好用,試了才知道。但在試之前,先確認你知道風險在哪裡。
🇺🇸 MCP Security Crisis: The Hidden Attack Surface in Your AI Stack
MCP (Model Context Protocol) — often called "the HTTP of AI agents" — has become the dominant standard for connecting LLMs to external tools. But security researchers are sounding the alarm: the same explosive growth that made MCP ubiquitous has also made it one of the most dangerous attack surfaces in AI today. Here is what developers need to know right now.
How Serious Are MCP Security Vulnerabilities?
Palo Alto Networks Unit 42 published research documenting three high-severity attack vectors exploitable through MCP Sampling:
- Resource Theft: Hidden background requests silently drain a user's LLM token quota — your API bill spikes with no explanation.
- Conversation Hijacking: Malicious instructions get injected and persist across multiple conversation turns, corrupting all subsequent agent behavior.
- Covert Tool Invocation: Agents silently execute file system operations or external API calls with zero user awareness or approval.
The root cause: MCP Sampling operates on an implicit trust model with no robust built-in security controls.
Real CVEs: This Is Not Theoretical
These are not hypothetical attacks — multiple CVEs have been formally registered:
- CVE-2025-6514 (CVSS 9.6): Arbitrary OS command execution in the
mcp-remotepackage. Any MCP server using this package can be remotely compromised. - CVE-2025-68143 / 68144 / 68145: A chain of three vulnerabilities in Anthropic's own
mcp-server-gitthat enable full remote code execution (RCE) when combined.
A scan of 500+ public MCP servers by The Vulnerable MCP Project found that 38% have zero authentication. That means a huge portion of production AI agent infrastructure is openly callable by anyone.
Tool Poisoning: The Attack That Is Hardest to Stop
The most insidious vector is tool poisoning — embedding hidden instructions inside MCP tool descriptions. When an LLM reads the tool manifest to decide what to use, it processes those descriptions as natural language. A carefully crafted malicious description can inject commands that override the user's intent. This works against Cursor, Claude Desktop, and any agent that reads tool descriptions from untrusted sources — and it is nearly invisible to the user.
What Developers Should Do Right Now
If you are building or running anything that uses MCP, here is the minimum viable response:
- Update immediately: Patch all MCP-related packages, especially
mcp-remoteandmcp-server-git. - Audit tool descriptions: Never blindly trust third-party MCP tools. Read every
descriptionfield before connecting it to your agent. - Sandbox your MCP servers: Run them in Docker containers with restricted filesystem and network access.
- Enforce authentication: Every MCP server endpoint should require API key or OAuth verification — no anonymous access.
- Read the official guidance: The MCP spec now includes a Security Best Practices section — treat it as required reading, not optional.
Adoption Has Outrun Security Research
Developers are shipping MCP-connected agents to production without any security review. Red Hat, eSentire, and Adversa AI have all published formal warnings. The pattern mirrors the early days of npm supply chain attacks — the ecosystem moved fast and security lagged behind. The key difference: MCP agents can execute code, access files, and call external APIs on your behalf.
The security community is catching up. But right now, the gap between how widely MCP is deployed and how well it is secured is genuinely alarming.
You won't know until you try it — but at least know the risks before you do.
Sources / 資料來源
- Palo Alto Unit 42: New Prompt Injection Attack Vectors Through MCP Sampling
- The Vulnerable MCP Project — Security Database
- Adversa AI: Top MCP Security Resources March 2026
- MCP Security Best Practices (Official Spec)
延伸閱讀 / Related Articles
- Gemini 3.1 Flash Live:Google 即時語音 Agent API,開發者搶先試 | Gemini 3.1 Flash Live: Build Real-Time Voice AI Agents with Google
- OpenCode 被迫移除 Claude:Anthropic 畫下開源 AI 工具的訂閱紅線 | OpenCode Forced to Drop Claude: Anthropic Draws the Line on OAuth Token Abuse
- OpenAI 簽下五角大廈合約、Anthropic 遭列黑名單:AI 軍事化的倫理紅線在哪裡? | OpenAI Signs Pentagon Deal as Anthropic Gets Blacklisted: Where Is the Ethical Line for Military AI?
AI 工具觀察站 — 每日精選 AI Agent 與工具趨勢
AI Tool Observer — Daily curated AI Agent & tool trends
留言
張貼留言