25 March 2026
Chicago 12, Melborne City, USA
Curiosity

Major conference catches illicit AI use — and rejects hundreds of papers

Organizers of the 2026 International Conference on Machine Learning (ICML) used a watermarking system to catch the use of AI in peer review of conference papers. Credit: Stephen Barnes/Alamy

A major artificial-intelligence conference has rejected 497 papers — roughly 2% of submissions — whose authors violated AI-use policies in their peer reviews of other articles submitted to the meeting.

The International Conference on Machine Learning (ICML), to be held in Seoul in July, has a reciprocal review policy, meaning that, bar certain exceptions, every paper must have an author who reviews other conference papers. Authors whose reviews violated the conference’s large language model (LLM)-use policy had their papers rejected.

Conference organizers detected the illicit AI use by hiding watermarks in research papers distributed for review. If a researcher used an LLM to generate their peer review, instructions hidden in the watermark prompted the LLM to include telltale phrases in the review text. The presence of these phrases revealed that an AI model had been used to generate the review.

“We hope that by taking strong action against violations of agreed-upon policy we will remind the community that as our field changes rapidly the thing we must protect most actively is our trust in each other,” said the organizers in a blog post on 18 March.

“What the ICML case shows is a research community in need of clear guidance on responsible AI use, including use in peer review,” says Marie Soulière, head of editorial ethics and quality assurance at the publishing company Frontiers in Lausanne, Switzerland.

Many researchers applauded the news of the ICML’s actions on the social-media platform X. Some suggested that other conferences could learn from the ICML or that the meeting could go further, for example by banning the rejected authors from resubmitting.

However, Zhengzhong Tu, a computer scientist at Texas A&M University in College Station, said that the policy would be ineffective. “It will only demotivate all the reviewers,” he said, adding that they will avoid routes banning AI use and will use LLMs “to generate meaningless reviews”.

AI reviewer

Using AI in peer review is now common: more than half of researchers in a 2025 survey from Frontiers have done it, despite journals and conference policies often banning its use. But AI researchers are starkly divided on whether LLMs should have a role in peer review.

This sparked ICML organizers, for the first time, to operate two peer-review streams — one allowing limited LLM use, and the other strictly prohibiting it. Authors and reviewers were allowed to choose their preferred stream.

First Appeared on
Source link

Leave feedback about this

  • Quality
  • Price
  • Service

PROS

+
Add Field

CONS

+
Add Field
Choose Image
Choose Video