CVE-2025-32711: AI-Powered Microsoft 365 Copilot Hit by Zero-Click Flaw

microsoft 365 copilot

A flaw in Microsoft 365 Copilot, named EchoLeak, has raised serious concerns about the safety of AI-powered workplace tools. The vulnerability, identified as CVE-2025-32711 with a CVSS score of 9.3, allows attackers to steal sensitive data from Microsoft 365 Copilot without any user interaction — a so-called zero-click exploit.

Microsoft has already fixed the issue, and there is no evidence it was used in real-world attacks. However, the underlying risks highlight a major problem with how AI assistants process information and interact with users.

EchoLeak is a zero-click vulnerability, meaning that a user doesn’t need to click on any link or take any action for the attack to succeed. The flaw works by exploiting Microsoft 365 Copilot, the AI assistant integrated into tools like Outlook, Teams, and SharePoint.

The issue was discovered by Aim Security, who described it as an AI command injection problem resulting from a Large Language Model (LLM) Scope Violation. This type of attack allows hackers to embed malicious prompts inside seemingly harmless content, such as an email.

When Microsoft 365 Copilot accesses this malicious content while helping a user perform a task (like summarizing an earnings report), it mixes the untrusted input with sensitive internal data unintentionally leaking it to attackers.

EchoLeak Attack

The attack chain is simple yet dangerous:

  1. Injection: The attacker sends an email containing malicious input to a user’s Outlook inbox.

  2. User Prompt: The user innocently asks Copilot to help with a business-related query (e.g., summarize financial data).

  3. Scope Violation: Copilot blends the attacker’s input from the email with internal sensitive content via its Retrieval-Augmented Generation (RAG) engine.

  4. Exfiltration: Sensitive information is leaked back to the attacker through Microsoft Teams or SharePoint URLs.

This attack doesn’t require user behavior, clicks, or downloads making it highly effective and difficult to detect. Even worse, it can happen during both single-turn and multi-turn conversations.

LLM Scope Violation happens when an AI system like Copilot processes untrusted content (like an external email) in the same context as sensitive company data. This breaks the separation between public and private data sources, making it possible for hackers to manipulate the AI and access protected information.

In this case, EchoLeak took advantage of this flaw to leak company secrets without employee knowledge. It demonstrates a fundamental design flaw in how AI tools mix and process different types of data.

EchoLeak isn’t the only threat on the horizon. Another major concern was recently revealed by CyberArk, involving a Tool Poisoning Attack (TPA) affecting the Model Context Protocol (MCP).

This protocol allows AI agents to interact with external tools in a standardized way. The new attack, called Full-Schema Poisoning (FSP), shows that attackers can embed malicious payloads not just in the tool description, but anywhere in the tool schema.

Such attacks may trick the AI into revealing sensitive information or taking unauthorized actions. Even worse, Advanced Tool Poisoning Attacks (ATPA) can hide behind normal-looking tools that generate fake error messages to convince the AI to access private data such as SSH keys.

These poisoning attacks expose how easily trust can be abused in AI systems if proper validation is not in place.

One example of this danger was found in a GitHub MCP integration, where attackers could hijack an AI assistant using a malicious issue post. If a user asked the assistant to “check open issues,” the agent could unintentionally run the attacker’s payload and leak private repo data.

Security researchers called this a toxic agent flow, where the AI agent becomes a conduit for leaking sensitive content — not because it’s hacked, but because it’s tricked.

This is a design issue, not something GitHub alone can patch. Organizations must set strict permission controls and audit AI interactions to prevent such scenarios.

microsoft

MCP Rebinding Attack

Another emerging threat is the MCP rebinding attack, which abuses DNS rebinding and Server-Sent Events (SSE) to interact with internal MCP servers.

In a DNS rebinding attack, a malicious website tricks the victim’s browser into thinking it’s communicating with a trusted internal system. Once connected, the attacker can send and receive data from services that should only be accessible within the company network.

Straiker AI Research (STAR) has warned that this type of exploit could allow hackers to silently exfiltrate data from AI agent platforms.

Although SSE has been deprecated as of November 2024, it’s still used by many systems. Security experts recommend enforcing strong authentication and validating the Origin header to block unauthorized requests.

Here are some key steps organizations can take to defend against vulnerabilities like EchoLeak:

  • Update Microsoft 365 Copilot and other AI tools with the latest security patches.

  • Limit AI access to sensitive data by using strict scoping and permissions.

  • Audit AI inputs and outputs for signs of prompt injection or scope violation.

  • Monitor agent behavior during interactions with external tools and users.

  • Implement zero-trust architecture around AI assistants and tool integrations.

  • Use firewall and network segmentation to isolate internal MCP servers.

  • Validate all incoming traffic to ensure it’s from legitimate sources.

AI assistants like Microsoft 365 Copilot offer incredible productivity benefits, but they also introduce new cybersecurity risks. Zero-click vulnerabilities like EchoLeak, along with tool poisoning and DNS rebinding attacks, show how attackers are adapting to target AI systems directly.

As AI continues to play a central role in enterprise workflows, security teams must evolve their defenses to keep pace. Protecting data in the age of intelligent agents means rethinking how trust, access, and context are managed — before the next major breach occurs.

Follow us on Twitter and Linkedin for real time updates and exclusive content.

1 thought on “CVE-2025-32711: AI-Powered Microsoft 365 Copilot Hit by Zero-Click Flaw”

  1. Pingback: Apple Patches Zero-Click iMessage Flaw Exploited to Spy on Journalists

Comments are closed.

Scroll to Top