LiteLLM Supply Chain Attack: Your AI Dev Environment May Already Be Compromised
By Kit 小克 | AI Tool Observer | 2026-03-27
🇹🇼 LiteLLM Supply Chain Attack Warning: Your AI Dev Environment May Already Be Compromised
If you are an AI developer, you need to see this immediately: the widely used Python package LiteLLM shipped malicious code in version 1.82.8 (and 1.82.7) that automatically steals every credential on your system the moment Python starts.
What Is LiteLLM, and Why Is the Blast Radius So Large?
LiteLLM is a library that lets developers call OpenAI, Claude, Gemini, Mistral, and other LLM APIs through a single unified interface, and it is extremely popular in the AI development community. Almost anyone building LLM applications, agent systems, or AI automation has probably installed it at some point.
The Attack: A Stealthy Backdoor in a .pth File
What makes this attack especially nasty is the technique: the malicious code hides in a file named litellm_init.pth, placed in Python's site-packages/ directory.
The defining property of .pth files is that Python executes them automatically at interpreter startup. You never need to import litellm; as soon as any Python process starts, the malware triggers. The attacker also obfuscated the payload with two layers of base64 encoding to evade static scanning tools.
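The startup hook is easy to demonstrate with a harmless stand-in. In this sketch, demo_init.pth and the PTH_EXECUTED environment variable are invented for illustration, and site.addsitedir() stands in for what the interpreter itself does to site-packages/ at startup:

```python
import os
import site
import tempfile

# Create a throwaway directory holding a benign .pth file.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo_init.pth"), "w") as f:
    # Any .pth line starting with "import " is executed as code by site.py.
    f.write("import os; os.environ['PTH_EXECUTED'] = '1'\n")

# Process the directory the way Python processes site-packages/ at startup.
site.addsitedir(d)

# The code ran even though we never imported any package from that directory.
print(os.environ.get("PTH_EXECUTED"))  # -> 1
```

The same mechanism fires for every Python process on the machine, which is why the real payload needed no import of litellm at all.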
What Was Taken
Once triggered, the malware collects the following, encrypts it, and sends it to the attacker's server (models.litellm.cloud):
- Environment variables (every API key, token, and secret)
- SSH keys and config files
- Cloud credentials (AWS, GCP, Azure)
- Kubernetes secrets and service account tokens
- Docker authentication configs
- Shell history
- Cryptocurrency wallet data
- Database connection configs
The data is encrypted with AES-256 and then wrapped with the attacker's RSA public key before being sent out, which means only the attacker can decrypt it.
Actions to Take Immediately
- Check your version: run pip show litellm; if it reports 1.82.7 or 1.82.8, act now
- Manually delete: check whether litellm_init.pth exists in site-packages/
- Rotate all credentials: treat every API key, SSH key, and cloud credential present on the machine as leaked
- Use the open-source remediation tool ghostgap, which can automatically remove the backdoor and help rotate credentials across 8 ecosystems
Note: simply upgrading to a newer version is not enough. The upgrade does not remove the malicious .pth file; it must be deleted manually.
A Warning About Supply Chain Attacks
This incident is another reminder that the AI toolchain is becoming an attack target. As more and more developers install LLM packages into CI/CD environments, cloud servers, and local dev machines, a single poisoned package can cause a large-scale credential leak. Do not lower your security guard just because something is an "AI tool".
You won't know how good a tool is until you try it, but with security you cannot afford to find out after the fact.
🇺🇸 LiteLLM Supply Chain Attack: Your AI Dev Environment May Already Be Compromised
If you are an AI developer, stop what you are doing and read this: the widely used Python package LiteLLM versions 1.82.8 and 1.82.7 contained malware that automatically steals every credential on your system the moment Python starts.
What Is LiteLLM and Why Does This Matter So Much?
LiteLLM is one of the most popular libraries for calling multiple LLM APIs (OpenAI, Claude, Gemini, Mistral, etc.) through a unified interface. If you have been building AI applications, agent systems, or LLM automation pipelines, there is a good chance you have it installed somewhere.
The Attack: A Hidden Backdoor via .pth Files
What makes this attack particularly nasty is the delivery mechanism. The malicious payload was embedded in a file called litellm_init.pth, placed inside Python's site-packages/ directory.
Python automatically executes .pth files at interpreter startup — no import litellm required. The moment any Python process starts on an affected machine, the credential stealer runs. The payload was also double base64-encoded to evade static analysis tools.
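Since legitimate .pth files are common, it helps to know which ones actually contain executable lines. The following sketch (executable_pth_lines is an invented helper name, not an existing tool) enumerates every .pth file visible to the current interpreter and flags the lines Python would exec at startup:

```python
import os
import site

def executable_pth_lines():
    """List (path, line) pairs for .pth lines Python would exec at startup."""
    findings = []
    dirs = site.getsitepackages() + [site.getusersitepackages()]
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            if not name.endswith(".pth"):
                continue
            path = os.path.join(d, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for line in f:
                    # site.py only execs lines beginning "import " or "import\t"
                    if line.startswith(("import ", "import\t")):
                        findings.append((path, line.strip()))
    return findings

for path, line in executable_pth_lines():
    print(f"{path}: {line}")
```

Note that some legitimate tooling (for example setuptools' distutils shim or editable installs) uses the same mechanism, so a hit is not automatically malicious; review anything you do not recognize by hand.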
What Was Stolen
Once triggered, the malware collected the following and exfiltrated it to the attacker's server (models.litellm.cloud):
- All environment variables (API keys, tokens, secrets)
- SSH keys and config files
- Cloud credentials (AWS, GCP, Azure)
- Kubernetes secrets and service account tokens
- Docker authentication configs
- Shell history files
- Cryptocurrency wallet data
- Database connection configs
The data was AES-256 encrypted and wrapped with the attacker's hardcoded RSA public key before exfiltration — meaning only the attacker can decrypt it.
What You Should Do Right Now
- Check your version: run pip show litellm; if you have 1.82.7 or 1.82.8, act immediately
- Manually delete: look for litellm_init.pth in your site-packages/ directory
- Rotate all credentials: treat every API key, SSH key, and cloud credential on that machine as compromised
- Use the open-source remediation tool ghostgap, which automates cleanup and credential rotation across 8 ecosystems
Critical note: simply upgrading to a newer version is not enough. The malicious .pth file persists after an upgrade and must be removed manually.
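That manual check and cleanup can be sketched in a few lines of Python. Here find_malicious_pth is an invented helper, separate from ghostgap; pass remove=True only after you have preserved a copy of the file for forensics:

```python
import os
import site

def find_malicious_pth(remove=False):
    """Report (and optionally delete) litellm_init.pth in any site dir."""
    hits = []
    for d in site.getsitepackages() + [site.getusersitepackages()]:
        path = os.path.join(d, "litellm_init.pth")
        if os.path.exists(path):
            hits.append(path)
            if remove:
                os.remove(path)  # an upgrade never touches this file
    return hits

for p in find_malicious_pth():
    print("FOUND:", p)
```

Run it with the same interpreter or virtualenv you want to audit, since each environment has its own site-packages/ directories.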
A Wake-Up Call for AI Supply Chain Security
This incident is a stark reminder that the AI toolchain is increasingly a target. As more developers install LLM packages into CI/CD pipelines, cloud servers, and local dev machines, a single compromised package can cascade into a massive credential breach. That days passed between publication of the malicious version and widespread detection is itself concerning.
Practical takeaways going forward: pin your package versions, audit site-packages/ periodically, use isolated virtual environments for AI projects, and treat any LLM library with the same scrutiny as any other production dependency.
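One concrete way to act on the version-pinning advice is pip's hash-checking mode; the version number and digest below are placeholders, not verified values:

```shell
# requirements.txt: pin an exact, known-good version plus its sha256 digest
#   litellm==1.82.6 --hash=sha256:<digest copied from a trusted lockfile>

# Install with hash checking: pip refuses any artifact whose digest differs,
# so a tampered re-upload of a pinned version cannot install silently.
pip install --require-hashes -r requirements.txt
```

Tools such as pip-tools' pip-compile (with --generate-hashes) can produce these digests for an entire dependency tree.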
You won't know until you try it — but with security, you really can't afford to find out the hard way.
Sources
- GitHub Issue: CRITICAL — Malicious litellm_init.pth in litellm 1.82.8 (credential stealer)
- ghostgap: Open-source remediation tool for the LiteLLM supply chain attack
- Hacker News Discussion: My minute-by-minute response to the LiteLLM malware attack
AI Tool Observer — Daily curated AI Agent & tool trends