OpenAI will show how models do on hallucination tests and 'illicit advice'

cnbc.com/2025/05/14/openai-will-now-show-how-models-do-on-hallucination-tests.html

OpenAI on Wednesday announced a new "safety evaluations hub," a webpage where it will publicly display artificial intelligence models' safety results and how they perform on tests for hallucinations, jailbreaks and harmful content, such as "hateful content or illicit advice."
OpenAI said…

This story appeared on cnbc.com, 2025-05-14 19:35:13.