Scholars sneaking phrases into papers to fool AI reviewers

go.theregister.com/feed/www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers

Using prompt injections to play a Jedi mind trick on LLMs
A handful of international computer science researchers appear to be trying to influence AI reviews with a new class of prompt injection attack.…

This story appeared on go.theregister.com, 2025-07-07 22:03:05.
The Entire Business World on a Single Page. Free to Use →