Google has, for the first time, detected and blocked a zero-day exploit that its researchers say was developed with the help of artificial intelligence. The Google Threat Intelligence Group (GTIG) found that prominent cybercrime actors were preparing a large-scale attack using the vulnerability.
The exploit was designed to bypass two-factor authentication in an open-source web-based systems management tool, which Google has not publicly named. Google's specialists found clear indications in the code that AI was involved in its creation.
Among the most striking evidence was a "hallucinated" CVSS score, a hallmark of language-model output, along with an unusually rigid, structured format consistent with LLM training data.
The exploit took advantage of a high-level semantic logic flaw: the developers had hardcoded a trust assumption into the platform's 2FA system.
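Since the affected tool has not been identified, the following is only a minimal, hypothetical sketch of the general anti-pattern being described: a hardcoded trust assumption that lets certain requests skip the 2FA check entirely. Every name here (`verify_2fa`, `TRUSTED_NET`, the placeholder TOTP check) is invented for illustration.

```python
import ipaddress

# Hypothetical example of a hardcoded trust assumption in a 2FA flow.
# Nothing here comes from the actual affected tool, which Google has
# not publicly named.

TRUSTED_NET = ipaddress.ip_network("10.0.0.0/8")  # hardcoded "internal" range

def totp_is_valid(user: str, code: str) -> bool:
    """Stand-in for a real TOTP verification call (placeholder only)."""
    return code == "123456"

def verify_2fa(user: str, code: str, client_ip: str) -> bool:
    # FLAW: requests that *appear* to originate from the internal
    # network are trusted unconditionally, so an attacker who can
    # spoof or proxy an internal source address bypasses 2FA without
    # ever presenting a valid code.
    if ipaddress.ip_address(client_ip) in TRUSTED_NET:
        return True
    return totp_is_valid(user, code)

print(verify_2fa("alice", "wrong-code", "10.1.2.3"))    # bypassed: True
print(verify_2fa("alice", "wrong-code", "203.0.113.9")) # rejected: False
```

This is why such flaws are called "semantic": the code is syntactically and memory-safe, but the trust assumption baked into the logic is wrong, which is invisible to conventional vulnerability scanners.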

First documented case by Google
This case marks a milestone: it is the first time Google has recorded evidence that AI was used to create an attack of this kind. The researchers clarified that they do not believe Google's own Gemini model was the one used. In any case, they interrupted the exploit before it could be deployed at scale.
The report also warns that cybercriminals are increasingly turning to artificial intelligence both to discover and to exploit security vulnerabilities. It comes after several weeks of concern over the offensive cybersecurity capabilities of AI models and over Linux vulnerabilities discovered with AI.
Hackers are going further and now also attack AI systems directly, GTIG reports. Adversaries target the integrated components that give AIs their utility, such as autonomous capabilities and third-party data connectors.
One of the techniques described is "persona-driven jailbreaking," in which the model is instructed to act as a security expert in order to surface vulnerabilities. Attackers also feed AI models entire repositories of vulnerability data and use tools like OpenClaw to refine AI-generated payloads in controlled environments, improving their reliability before launching them.
This episode reinforces the dual nature of artificial intelligence in the world of cybersecurity: a powerful tool for both defenders and attackers. While companies like Google enhance their detection systems, cybercriminals accelerate their capabilities with the same technology.
Experts expect such cases to multiply in the near future, forcing software developers to review hardcoded trust assumptions and strengthen code reviews, especially in critical systems-management tools.