The revelation of a critical zero-click vulnerability in Microsoft 365 Copilot has exposed fundamental security weaknesses in enterprise AI systems, illustrating how artificial intelligence agents can be weaponized against users without any required interaction. Designated as EchoLeak and tracked under CVE-2025-32711, the flaw carries a critical CVSS score of 9.3, representing one of the most severe AI-related security revelations to date.
The vulnerability operates through an insidious attack mechanism that exploits Copilot’s automatic email scanning capabilities. Attackers craft tailored emails containing hidden command instructions, which Copilot processes during routine background operations without triggering any user alerts or requiring victim participation.
Malicious emails with embedded command instructions bypass user detection through Copilot’s automated background scanning processes.
The AI agent automatically executes these malicious commands, facilitating unauthorized data extraction from across Microsoft 365 services, including Outlook, OneDrive, Office files, SharePoint, and Teams. With zero-day exploits becoming increasingly sophisticated, organizations face mounting challenges in protecting their systems against previously unknown vulnerabilities.
This attack method, classified as AI command injection, capitalizes on indirect prompt injection within Copilot’s processing architecture. The exploit particularly targets the retrieval-augmented generation features that allow Copilot to reference previous conversations and user history.
Once triggered, the vulnerability permits attackers to exfiltrate sensitive corporate data, including confidential Teams messages, emails, private files, and complete chat histories, transmitting this information to external servers without detection.
The zero-click nature distinguishes EchoLeak from traditional phishing attacks, rendering conventional security defenses ineffective. SOC Prime and Aim Security collaborated to provide technical analysis, classifying the vulnerability as an “LLM Scope Violation,” which expands existing definitions of AI agent security boundaries.
The attack demonstrates how enterprise AI integrations create new attack vectors that bypass established data protection measures. Microsoft’s five-month timeline to fully address the vulnerability has been criticized as lengthy for security updates of this severity. Microsoft’s extensive ecosystem, which runs on over 1.4 billion devices globally, amplifies the potential impact of such AI vulnerabilities across enterprise environments worldwide.
Microsoft responded by deploying server-side patches without requiring customer intervention, stating that no evidence of active exploitation preceded the fix. The company confirmed additional defense-in-depth measures are under development to address similar future vulnerabilities.
Security researchers highlight that EchoLeak represents broader systemic risks affecting LLM-based AI agents beyond Microsoft’s ecosystem, raising critical questions about data governance and access controls within enterprise artificial intelligence implementations across the technology sector.