AI Agents Can Exploit Smart Contracts and Carry Out High-Value Attacks

New research by Anthropic highlights the growing capability of AI agents to carry out on-chain attacks. In simulations using smart contracts that were exploited between 2020 and 2025, models including Claude Opus 4.5, Sonnet 4.5, and GPT-5 reproduced attacks that had caused combined losses of approximately $4.6 million. The study also surfaced two new zero-day vulnerabilities while examining 2,849 contracts with no publicly known weaknesses, and the models demonstrated profitable exploits against them in simulation. The research further reports that the profitability of AI-driven exploits has doubled roughly every 1.3 months over the past year. That trajectory suggests AI agents can now autonomously exploit vulnerabilities for monetary gain.
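
To put the reported growth rate in perspective, here is a minimal sketch of how a ~1.3-month doubling time compounds over a year. The 1.3-month figure comes from the study's summary; the simple exponential-growth model and the function names are illustrative assumptions, not part of the research itself.

```python
import math

# Reported doubling time for AI-driven exploit profitability (months).
# The ~1.3 figure is from the study's summary; modeling the trend as pure
# exponential growth is an assumption made here for illustration only.
DOUBLING_TIME_MONTHS = 1.3

def growth_factor(months: float, doubling_time: float = DOUBLING_TIME_MONTHS) -> float:
    """Multiplicative growth after `months`, assuming exponential doubling."""
    return 2 ** (months / doubling_time)

if __name__ == "__main__":
    one_year = growth_factor(12)
    # 2 ** (12 / 1.3) is roughly 600, i.e. a ~600x increase in one year
    print(f"Implied growth over 12 months: ~{one_year:,.0f}x")
```

Under these assumptions, a 1.3-month doubling time implies roughly a 600-fold increase in exploit profitability over twelve months, which is what makes the reported trend notable.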