It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

go.theregister.com/feed/www.theregister.com/2025/10/09/its_trivially_easy_to_poison

Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016 percent of the entire training dataset
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
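For scale, here is a quick back-of-the-envelope check of the headline figure. The 250-document count and the 0.00016 percent share come from the article; the total token budget is an assumption here, using the Chinchilla-optimal rule of thumb of roughly 20 training tokens per parameter, which the article itself does not state.

    # Sanity check: how small is 0.00016% of a 13B model's training data?
    # Figures from the article: 250 poisoned docs, 13B params, 0.00016%.
    # Assumption: Chinchilla-optimal budget of ~20 tokens per parameter.

    PARAMS = 13e9                    # 13B-parameter model, per the article
    TOKENS_PER_PARAM = 20            # assumed Chinchilla-optimal ratio
    POISON_DOCS = 250                # malicious documents, per the article
    POISON_FRACTION = 0.00016 / 100  # 0.00016 percent, per the article

    total_tokens = PARAMS * TOKENS_PER_PARAM        # ~260B training tokens
    poison_tokens = total_tokens * POISON_FRACTION  # ~416k poisoned tokens
    tokens_per_doc = poison_tokens / POISON_DOCS    # ~1.7k tokens per doc

    print(f"total training tokens: {total_tokens:.3g}")
    print(f"poisoned tokens:       {poison_tokens:.3g}")
    print(f"tokens per document:   {tokens_per_doc:.0f}")

Under that assumed token budget, the poisoned material works out to a few hundred thousand tokens, or roughly 1,700 tokens per document - a vanishingly small slice of a 260-billion-token corpus.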
