It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic
go.theregister.com/feed/www.theregister.com/2025/10/09/its_trivially_easy_to_poison
Just 250 malicious training documents can poison a 13B parameter model - that's 0.00016% of a whole dataset
Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
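The 0.00016% figure is easiest to appreciate as a quick back-of-the-envelope calculation. The sketch below is not from the article or the Anthropic study; it assumes a Chinchilla-style training budget of roughly 20 tokens per parameter for the 13B model and an average poisoned-document length (back-solved here to about 1,700 tokens) purely to illustrate how small a slice of the corpus 250 documents represent.

```python
# Back-of-the-envelope check of the 0.00016% figure from the standfirst.
# Assumptions (not stated in this snippet): the 13B model is trained on a
# Chinchilla-style budget of ~20 tokens per parameter, and the average
# poisoned-document length is back-solved to reproduce the reported share.

PARAMS = 13e9                                  # 13B-parameter model
TOKENS_PER_PARAM = 20                          # assumed Chinchilla-style budget
TRAINING_TOKENS = PARAMS * TOKENS_PER_PARAM    # ~260 billion training tokens

POISONED_DOCS = 250                            # figure reported by Anthropic
TOKENS_PER_POISONED_DOC = 1_664                # assumed average length per doc

poisoned_tokens = POISONED_DOCS * TOKENS_PER_POISONED_DOC
fraction = poisoned_tokens / TRAINING_TOKENS

print(f"Poisoned share of training data: {fraction:.8f} ({fraction * 100:.5f}%)")
# -> Poisoned share of training data: 0.00000160 (0.00016%)
```

Even if the assumed token counts are off by a factor of a few, the poisoned share stays in the parts-per-million range, which is what makes the result striking.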
This story appeared on go.theregister.com, 2025-10-09 20:45:14.