A study finds that as few as 250 malicious documents can produce a “backdoor” vulnerability in an LLM, regardless of model size or training data volume

anthropic.com/research/small-samples-poison
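The headline figure is easiest to appreciate as a share of the training corpus. Below is a minimal back-of-the-envelope sketch, assuming an average poisoned document of roughly 1,000 tokens and purely illustrative corpus sizes; none of these numbers are taken from the study itself.

```python
# Illustrative arithmetic (assumed values, not from the study): why a fixed
# count of 250 poisoned documents is striking regardless of corpus size.

POISONED_DOCS = 250
TOKENS_PER_DOC = 1_000  # assumed average length of a poisoned document

# Hypothetical training-corpus sizes, in tokens.
corpora = {
    "smaller model (~12B tokens)": 12e9,
    "mid-size model (~100B tokens)": 100e9,
    "larger model (~260B tokens)": 260e9,
}

for name, total_tokens in corpora.items():
    poisoned_tokens = POISONED_DOCS * TOKENS_PER_DOC
    fraction = poisoned_tokens / total_tokens
    print(f"{name}: poisoned share ≈ {fraction:.8%}")
```

Under these assumptions, even the smallest corpus leaves the poisoned material at a vanishingly small fraction of training tokens, which is what makes a roughly constant document count, independent of model size or data volume, notable.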

This story appeared on anthropic.com on 2025-10-09 at 17:51.