ICML 2026 用隱形浮水印抓 AI 代寫審稿:497 篇論文被退,學術誠信怎麼辦? | ICML 2026 Catches AI Peer Review Cheating With Hidden Watermarks: 497 Papers Rejected
By Kit 小克 | AI Tool Observer | 2026-04-13
🇹🇼 ICML 2026 用隱形浮水印抓 AI 代寫審稿:497 篇論文被退,學術誠信怎麼辦?
ICML 2026(國際機器學習大會)用一招隱形浮水印,抓到近 500 篇論文的審稿人偷用 AI 寫 review,直接退稿。這是學術界首次大規模用技術手段反制 AI 代寫審稿,背後的方法和爭議值得關注。
ICML 2026 怎麼抓到 AI 代寫審稿的?
ICML 主辦方在每篇提交論文的 PDF 中嵌入了人眼看不見、但 LLM 能讀到的隱形指令,要求 AI 在審稿意見中加入兩個特定詞組。如果審稿人直接把論文丟給 ChatGPT 之類的工具寫 review,這些詞組就會出現在審稿意見裡。
- 詞庫規模:17 萬個詞組,每篇論文隨機配兩個
- 誤判機率:低於百億分之一
- 偵測結果:795 篇審稿意見(約 1%)被標記違規
- 退稿數量:398 位審稿人涉及的 497 篇論文被直接退回
為什麼論文作者要被退稿?審稿人犯規不是審稿人的事嗎?
ICML 的規則是:你幫別人審稿時違規使用 AI,你自己提交的論文就會被退。邏輯是你同意了審稿規則卻違反,代表你不尊重學術社群的信任機制。這個「連坐」設計引發不少爭議,但也有效嚇阻了濫用。
ICML 2026 的雙軌審稿制度是什麼?
ICML 今年首創兩條審稿政策讓審稿人自選:
- Policy A(保守派):完全禁止使用 LLM 輔助審稿
- Policy B(開放派):允許用 LLM 理解論文和相關文獻、潤飾審稿文字
被抓到的 506 位審稿人中,有 51 位選了 Policy A 卻在超過半數的審稿中偷用 AI——等於明知故犯。
AI 代寫審稿有多普遍?
根據 2025 年 Frontiers 的調查,超過一半的研究者承認在 peer review 中用過 AI。ICLR 2026 也發現約 21% 的審稿意見是 AI 生成的。學術審稿的 AI 滲透率可能比大多數人想像的高得多。
這對 AI 研究社群意味著什麼?
ICML 的浮水印方法本質上是一種 prompt injection——在 PDF 中嵌入 LLM 才看得到的指令。這招雖然有效,但也暴露了一個諷刺的現實:AI 研究社群需要用 AI 的漏洞來防止 AI 被濫用。未來其他學術會議和期刊很可能會跟進類似措施。
好不好用,試了才知道。
🇺🇸 ICML 2026 Catches AI Peer Review Cheating With Hidden Watermarks: 497 Papers Rejected
ICML 2026 (International Conference on Machine Learning) caught nearly 500 papers with reviewers secretly using AI to write peer reviews, using an ingenious hidden watermark system. All 497 papers were desk-rejected — the largest enforcement action against AI misuse in academic peer review to date.
How Did ICML Catch AI-Generated Reviews?
Organizers embedded invisible instructions in each submitted PDF — readable by LLMs but invisible to humans — directing AI tools to include two specific phrases in any generated review. When reviewers fed papers directly into ChatGPT or similar tools, the trigger phrases appeared in their reviews.
- Phrase dictionary: 170,000 phrases, two randomly assigned per paper
- False positive rate: Less than 1 in 10 billion
- Detected violations: 795 reviews (~1% of all reviews)
- Papers rejected: 497 papers from 398 reviewers
Why Were Authors Punished for Reviewer Misconduct?
ICML's rule is straightforward: if you violate AI-use policies while reviewing others' papers, your own submissions get desk-rejected. The logic is that violating agreed-upon review rules breaks the trust the academic community depends on. Controversial, but effective as a deterrent.
What Is ICML's Dual-Track Review System?
ICML 2026 introduced two review policies for reviewers to choose from:
- Policy A (Conservative): No LLM use permitted in reviews
- Policy B (Permissive): LLMs allowed for understanding papers and polishing review text
Among the 506 flagged reviewers, 51 had chosen Policy A yet used AI in over half their reviews — deliberate violations.
How Common Is AI in Peer Review?
More common than most assume. A 2025 Frontiers survey found over half of researchers admitted using AI in peer review. ICLR 2026 reported approximately 21% of reviews were AI-generated. The watermark approach may be the first scalable countermeasure.
What Does This Mean for AI Research?
The watermark method is essentially a prompt injection — embedding LLM-visible instructions in PDFs. It's ironic that the AI research community is exploiting AI vulnerabilities to police AI misuse. Expect other conferences and journals to adopt similar measures soon.
Good or not? You won't know until you try.
Sources / 資料來源
- Nature 報導:ICML 用浮水印抓到數百篇 AI 代寫審稿
- ICML 官方部落格:LLM 審稿政策違規說明
- DEV Community 分析:Prompt Injection 在 Peer Review 的應用
常見問題 FAQ
ICML 2026 怎麼偵測 AI 代寫的審稿意見?
ICML 在每篇論文 PDF 中嵌入 LLM 才看得到的隱形指令,要求 AI 在審稿中加入特定詞組。如果審稿意見出現這些詞組,就代表審稿人用了 AI。
ICML 2026 退了多少篇論文?
共 497 篇論文被退稿,涉及 398 位審稿人。約 795 篇審稿意見(佔全部 1%)被標記為違規。
為什麼審稿人用 AI,論文作者要被退稿?
ICML 的規則是審稿人違規使用 AI,其自己提交的論文就會被退。目的是維護學術社群的信任機制。
ICML 2026 的雙軌審稿制度怎麼運作?
審稿人可選 Policy A(完全禁止 AI)或 Policy B(允許用 AI 理解論文和潤飾文字),兩條政策分開管理。
延伸閱讀 / Related Articles
- Microsoft MAI 自研模型實測:Suleyman 領軍打造語音、圖像三模型,正式向 OpenAI 宣戰 | Microsoft MAI Models Explained: In-House AI Takes on OpenAI With Speech, Voice and Image
- Claude Managed Agents 開發者完整指南:Anthropic 雲端 AI Agent 平台,Notion、Asana、Sentry 都在用 | Claude Managed Agents Guide: Anthropic Cloud AI Agent Platform Used by Notion, Asana and Sentry
- Claude for Word 完整解析:AI 直接在 Word 裡改合約,每一筆修改都是追蹤修訂 | Claude for Word Explained: AI Edits Contracts Inside Word With Native Tracked Changes
AI 工具觀察站 — 每日精選 AI Agent 與工具趨勢
AI Tool Observer — Daily curated AI Agent & tool trends
留言
張貼留言