News

Mental health experts say cases of people forming delusional beliefs after spending hours with AI chatbots are concerning and offer ...
Testing has shown that the chatbot exhibits a “pattern of apparent distress” when asked to generate harmful content ...
Anthropic rolled out a feature letting its AI assistant terminate chats with abusive users, citing "AI welfare" concerns and ...
Anthropic says the conversations make Claude show ‘apparent distress.’ ...
Anthropic has given its chatbot, Claude, the ability to end conversations it deems harmful. You likely won't encounter the ...
According to the company, this only happens in particularly serious or concerning situations. For example, Claude may choose ...
Anthropic has given Claude, its AI chatbot, the ability to end potentially harmful or dangerous conversations with users.
The Claude AI models Opus 4 and 4.1 will end harmful conversations only in “rare, extreme cases of persistently harmful or ...
Artificial intelligence company Anthropic has revealed new capabilities for some of its newest and largest models. According ...
Scientists have recently made a significant breakthrough in understanding machine personality. Although artificial intelligence systems are evolving quickly, they still have a key limitation: their ...
Bay Area tech companies are building powerful artificial intelligence systems that experts say could pose “catastrophic risks ...
Anthropic launches new 'Learning Modes' for its Claude AI, escalating the back-to-school competition with OpenAI's Study Mode ...