Anthropic Restricts Mythos AI Model Over Cybersecurity Exploitation Risks
Anthropic limits access to its Mythos AI model due to concerns over its ability to discover and exploit critical vulnerabilities in major systems.
Anthropic scrambles to remove leaked Claude Code from GitHub using copyright takedowns, as programmers convert the code into alternative scripting languages to bypass restrictions.
Anthropic’s Claude AI unveils a new code security tool that scans GitHub repositories and generates patches, triggering a $10 billion drop in cybersecurity stocks.
Anthropic introduces Claude Code Security in limited research preview, enabling teams to detect complex software vulnerabilities and generate suggested patches using advanced AI reasoning.
US AI company Anthropic says hackers exploited its Claude chatbot to launch cyberattacks, steal data, and aid North Korean scammers in job-fraud schemes.