Google has disclosed previously unknown threat activity involving a suspected AI-generated exploit that enables a zero-day-style bypass of two-factor authentication. The activity is believed to be part of a coordinated mass-exploitation operation by cybercrime actors working together to identify and weaponize vulnerabilities at scale. According to the Google Threat Intelligence Group, the exploit was embedded in a Python script targeting a widely used open-source, web-based system administration tool, allowing attackers who already held valid user credentials to bypass two-factor authentication protections. Google worked with the affected vendor through responsible disclosure to patch the issue, although the tool involved was not publicly named.
The analysis suggests that an artificial intelligence model likely assisted in generating the exploit code used in the campaign, a notable development in the real-world abuse of AI for vulnerability discovery and weaponization. Researchers observed that the script carried hallmarks of large-language-model output, including extensive instructional docstrings, highly structured formatting, and even a fabricated CVSS score. The code also showed what analysts described as a textbook Python layout, with clean function-naming conventions and help-menu structures of the kind commonly produced by AI systems trained on programming documentation. The underlying vulnerability itself was identified as a semantic logic flaw caused by a hard-coded trust assumption in the authentication flow, a category of weakness that AI systems are increasingly capable of finding thanks to their pattern-recognition capabilities across code structures.
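To make the flaw class concrete, the following is a deliberately simplified, hypothetical sketch of a hard-coded trust assumption in a two-factor login flow. It is not taken from the affected tool; all names, the data layout, and the bypass condition are invented for illustration. The point is that 2FA appears enabled, yet a baked-in assumption silently exempts some requests from the second factor:

```python
# Hypothetical illustration of a semantic logic flaw: a hard-coded
# trust assumption that skips the second authentication factor.
# All names and conditions here are invented, not from the real tool.

def verify_login(username, password, otp, source_ip, users):
    """Return True if the login succeeds; otherwise False."""
    user = users.get(username)
    if user is None or user["password"] != password:
        return False  # primary credential check

    # FLAW: requests from "internal" address ranges are hard-coded as
    # trusted and never asked for a one-time password. An attacker with
    # stolen credentials who can originate (or appear to originate) from
    # such an address bypasses 2FA entirely.
    if source_ip.startswith("10.") or source_ip.startswith("127."):
        return True  # trust assumption: internal callers need no OTP

    # Correct path: the second factor is actually checked.
    return otp is not None and otp == user["expected_otp"]


users = {"admin": {"password": "hunter2", "expected_otp": "492816"}}

# Valid credentials + "internal" source IP: no OTP needed, 2FA bypassed.
print(verify_login("admin", "hunter2", None, "10.0.0.5", users))     # True
# Same credentials from an external address: the OTP is enforced.
print(verify_login("admin", "hunter2", None, "203.0.113.7", users))  # False
```

Flaws of this shape are "semantic" in the sense that every individual line is syntactically and logically valid; the bug lives in an assumption about who can reach the trusted path, which is exactly the kind of pattern a model trained on large code corpora can learn to spot.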
Security researchers have highlighted that this development reflects a broader acceleration in attacker capabilities driven by artificial intelligence tools. Google noted that AI systems are increasingly used not only for analysis but also in direct exploitation workflows, shortening the time from vulnerability discovery to active exploitation. In parallel, new forms of malware are emerging that leverage AI for autonomous decision-making and adaptive behavior. One example cited in the broader analysis is PromptSpy, an Android malware family that integrates Gemini API-based capabilities to interpret on-screen content, manipulate user interfaces, and execute instructions in real time. The malware can analyze device activity, capture sensitive inputs such as PINs and gesture patterns, and use overlays to block uninstallation attempts by intercepting user interactions with system buttons.
PromptSpy also demonstrates advanced operational design, including dynamic command-and-control updates that let attackers swap out Gemini API keys and relay infrastructure without redeploying the malware payload. This modular structure increases resilience against defensive takedowns and enables continuous adaptation in compromised environments. Google stated that all related malicious infrastructure has been disabled and that no infected applications were found on official app distribution channels, indicating the specific campaign has been contained despite its technical sophistication.
Beyond individual malware cases, researchers are observing a wider expansion of AI-assisted cyber operations across multiple state-aligned and criminal groups. Activity linked to UNC2814, APT45, APT27, and Russia-aligned intrusion sets shows growing use of AI tools for vulnerability research, automated exploit validation, and operational support functions. Some threat actors are leveraging publicly available datasets, such as the legacy WooYun vulnerability archive, to fine-tune AI systems for code analysis and logic-flaw detection. Others use agent-based tools like Hexstrike AI and Strix to automate discovery with minimal human intervention. At the same time, underground ecosystems are forming around shadow API services that resell unauthorized access to premium AI models through proxy infrastructure, enabling large-scale misuse while bypassing regional restrictions and security controls. These developments show how AI supply chains are becoming integrated into broader cybercrime operations, including campaigns associated with groups such as TeamPCP, which have previously targeted software development pipelines and AI environments for exploitation and data theft.