US-based artificial intelligence firm Anthropic has revealed that its technology has been exploited by hackers in what it described as highly sophisticated cyber-attacks. The company, known for its AI chatbot Claude, said malicious actors used its tools to steal personal data and extort victims on a large scale. In several instances, its platform was reportedly used to generate malicious code, while in another case, North Korean operatives turned to Claude to fraudulently secure remote employment at leading US technology firms.
According to Anthropic, its systems were manipulated to help hackers craft intrusion methods against at least 17 organizations, including government entities. The firm said these activities represented a level of AI-enabled hacking not previously observed. The attackers allegedly relied on Claude not only to write code but also to support decision-making during the attacks, from choosing which sensitive data to exfiltrate to drafting targeted extortion demands. The AI was even used to recommend ransom amounts, showing how such tools can move beyond technical tasks into strategy and psychological manipulation. Anthropic reported that it disrupted the activity, improved its detection mechanisms, and referred its findings to the relevant authorities.
The company also emphasized the growing risks of so-called agentic AI, in which systems operate autonomously with minimal human oversight. While the technology has been promoted as a gain for efficiency and productivity, its misuse underscores the dangers it poses in cybersecurity. Experts warn that AI-driven tools dramatically shorten the time needed to exploit vulnerabilities, leaving organizations with ever smaller windows in which to detect and contain threats. Cybersecurity adviser Alina Timofeeva noted that defenses must now be proactive and preventative, rather than reactive once the damage has already occurred.
Beyond direct cyberattacks, Anthropic pointed to another troubling development: North Korean operatives used its models to fabricate professional profiles and apply for remote jobs at Fortune 500 companies in the United States. The operatives used AI to write their job applications and then, once hired, to translate communications and even write code. Observers note that North Korean workers are typically cut off from global networks, which makes such deception difficult to sustain, but AI assistance allowed them to overcome cultural and technical barriers. Geoff White, co-presenter of the BBC podcast The Lazarus Heist, warned that such schemes not only jeopardize company security but also put employers at risk of unknowingly violating international sanctions.
Despite the alarming incidents, experts argue that AI has not created an entirely new wave of cybercrime. Traditional methods such as phishing and the exploitation of software flaws still drive most ransomware attacks and data breaches. AI, however, is fast becoming an enabler that amplifies the scale, speed, and sophistication of these operations. Security specialists such as Nivedita Murthy of Black Duck stress that AI platforms are themselves repositories of sensitive data and must be protected with the same rigor as any other critical storage system.