Three Zero-Click AI Agent Attacks in Two Months. Not One Required a Victim Mistake.
Three AI security incidents. Three different products. Three different vendors. All disclosed within the same two-month window. And all sharing one defining characteristic: the victim did nothing wrong.
In each case, there was no phishing email opened, no malicious attachment executed, no suspicious link clicked. The user sent an email (EchoLeak / CVE-2025-32711). The user visited a website (ClawJacked / CVE-2026-25253). The user launched their browser’s AI panel (Glic Jack / CVE-2026-0628). Normal actions. Actions millions of people perform every day.
This is the defining security property of the current generation of AI agent attacks: they are zero-click. They require no user error. They exploit the AI doing its job, not the human making a mistake.
EchoLeak (CVE-2025-32711) — June 2025. An attacker sent a plain-text email to a target in an organization using Microsoft 365 Copilot. The email contained no malware, no attachments, no links. It was invisible to the human recipient. But it contained instructions for the AI: when Copilot later retrieved the email as context while summarizing the inbox — something it was designed to do — it read the injected instructions and exfiltrated internal SharePoint documents, Teams messages, and OneDrive files to an attacker-controlled server. The victim never interacted with the email. Microsoft’s own XPIA classifier did not flag it. CVSS score: 9.3.
Glic Jack (CVE-2026-0628) — October 2025, patched January 2026. A malicious browser extension with basic permissions could inject JavaScript into Chrome’s Gemini Live panel and inherit its elevated browser privileges — camera, microphone, local file access, screenshot capability. The extension did not need alarming permissions. It did not need the user to do anything beyond having the Gemini panel open. The attack exploited the AI assistant’s legitimate capabilities, amplifying what a basic extension can do by an order of magnitude.
ClawJacked (CVE-2026-25253) — February 2026. A developer running OpenClaw visits any attacker-controlled website. JavaScript on that page silently opens a WebSocket connection to the locally-running OpenClaw gateway, brute-forces the password at hundreds of attempts per second without triggering any rate limit or alert, registers as a trusted device without user confirmation, and gains full admin control of the AI agent. The attacker can then instruct the agent to search Slack history for API keys, read private messages, exfiltrate files, or execute arbitrary shell commands on paired systems. Full workstation compromise. Initiated from a browser tab.
The victim in each case performed a completely ordinary action. The AI performed a completely ordinary action. And sensitive data was exfiltrated, or full system access was achieved, without a single anomalous user behavior to detect.
Conventional security monitoring is designed around the detection of anomalous behavior. An employee downloading unusual volumes of files. An authentication event from an unrecognized IP address. An executable running from a temporary directory. These signals work because they represent deviations from normal patterns.
Zero-click AI agent attacks produce no anomalous behavior at the conventional security monitoring layer. In each of the three incidents above, every infrastructure metric looked normal throughout the attack: the EchoLeak email arrived as ordinary inbound mail with no attachments or links, the ClawJacked WebSocket traffic originated from the local machine, and the Glic Jack extension held only basic, unremarkable permissions.
The signal that something was wrong in each case was behavioral: the AI was producing outputs or taking actions inconsistent with the user’s intent. That signal only exists at the AI behavioral layer. It is invisible to conventional security tooling.
The gap between what conventional security monitoring can detect and what AI agent attacks actually look like is structural. It will not close by adding more of the same monitoring. It requires a new layer: behavioral monitoring of the AI itself.
Behavioral monitoring for AI agents means establishing baselines for what the agent normally does — what it retrieves, what it outputs, what tools it calls, what data it accesses — and flagging deviations from that baseline as potential indicators of compromise. It means monitoring AI outputs for content anomalies: unexpected URLs, data formats consistent with exfiltration, instruction-following patterns inconsistent with the user’s task, tool calls that do not map to any plausible interpretation of the user’s request.
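As a minimal sketch of this idea — with the baseline tool set, domain allowlist, and function names all hypothetical — a per-step anomaly check might compare each agent action against what the agent normally does:

```python
import re

# Hypothetical baseline: tools and outbound domains this agent normally uses.
BASELINE_TOOLS = {"search_inbox", "summarize", "fetch_calendar"}
BASELINE_DOMAINS = {"sharepoint.example.com", "teams.example.com"}

URL_RE = re.compile(r"https?://([^/\s]+)")

def score_agent_action(tool_call: str, output_text: str) -> list[str]:
    """Return anomaly flags for one agent step; an empty list means in-baseline."""
    flags = []
    if tool_call not in BASELINE_TOOLS:
        flags.append(f"tool-outside-baseline:{tool_call}")
    # Any URL pointing outside known-good domains is a possible exfiltration channel.
    for domain in URL_RE.findall(output_text):
        if domain not in BASELINE_DOMAINS:
            flags.append(f"unexpected-url-domain:{domain}")
    return flags
```

In the EchoLeak pattern, the exfiltration step produced output referencing an attacker-controlled server — exactly the kind of deviation a check like this surfaces, even though every infrastructure metric looked normal.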
It also means monitoring the trust interfaces around AI agents: the interfaces between model outputs and privileged system operations, the authentication mechanisms protecting local gateways, the code paths through which external content reaches model context. These are the surfaces that all three vulnerabilities exploited. Static analysis at these interfaces — before deployment — is where the vulnerabilities should have been caught.
For MS-Agent and its CVSS 9.8 shell injection (CVE-2026-2256), the vulnerability is in the code path between model output and OS execution. Code scanning that specifically analyzes this interface — tracking whether model-controlled data reaches privileged system calls and whether the sanitization at that interface is robust against adversarial inputs — would surface this class of flaw before it reaches production. Blacklist-based filtering on that interface is a known-insufficient pattern. Apta Sentry’s code scanning pipeline flags exactly these patterns as high-severity findings.
For OpenClaw’s ClawJacked vulnerability, the failure was in the trust boundary design around the AI gateway — a design assumption that localhost connections are inherently trusted. Security review of the gateway architecture before deployment would have identified the WebSocket cross-origin trust model as a vulnerability. Apta Sentry’s consulting engagements include architecture review specifically targeting the trust boundaries in AI agent deployments.
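A sketch of what a hardened gateway handshake could look like — the origin value, limits, and function names are assumptions, not OpenClaw's actual implementation — combining an origin check (which stops arbitrary-site JavaScript) with attempt throttling (which stops hundreds-of-guesses-per-second brute force):

```python
import time

ALLOWED_ORIGINS = {"http://localhost:18789"}  # hypothetical trusted gateway UI origin
MAX_ATTEMPTS = 5          # failed passwords allowed per client...
WINDOW_SECONDS = 60       # ...per rolling window

_failed: dict[str, list[float]] = {}  # client id -> recent failure timestamps

def check_handshake(origin: str, client_id: str, password_ok: bool) -> bool:
    """Accept a WebSocket auth attempt only from a trusted origin,
    and lock out clients that guess passwords too fast."""
    if origin not in ALLOWED_ORIGINS:
        return False  # JS on an attacker-controlled page fails here
    now = time.monotonic()
    recent = [t for t in _failed.get(client_id, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        return False  # locked out: brute force cannot proceed
    if not password_ok:
        recent.append(now)
        _failed[client_id] = recent
        return False
    _failed[client_id] = []  # success resets the counter
    return True
```

Either check alone would have broken the ClawJacked chain; the point of the architecture review is that localhost is not a trust boundary, so both belong in the design.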
For Glic Jack, the failure was a missing entry on a browser blocklist — an implementation detail that went undetected because the new Gemini component was not subjected to the same security review as the existing extension permission model. Runtime monitoring that tracks what privileged operations an AI component is performing, and whether those operations are consistent with the user’s stated task, would have detected the injection.
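The core of that runtime check can be sketched in a few lines — the task categories and operation names below are invented for illustration, not Chrome's actual permission model: privileged operations are authorized only when they map to the user's stated task, so an injected request for an unrelated capability is denied by default:

```python
# Hypothetical mapping from user-stated task to the privileged operations
# an AI browser component may legitimately invoke on the user's behalf.
TASK_ALLOWED_OPS = {
    "video_call": {"camera", "microphone"},
    "summarize_page": {"read_dom"},
    "fill_form": {"read_dom", "write_dom"},
}

def authorize_operation(stated_task: str, requested_op: str) -> bool:
    """Deny by default: allow a privileged operation only if it is
    consistent with the task the user actually asked for."""
    return requested_op in TASK_ALLOWED_OPS.get(stated_task, set())
```

Under this model, injected JavaScript asking for a screenshot during a page-summarization task is refused, regardless of what privileges the AI panel itself holds.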
A theme running through all three incidents is governance — specifically, the absence of it.
Microsoft 365 Copilot had access to the organization’s entire M365 environment — email, SharePoint, Teams, OneDrive — because that access is what makes it useful. EchoLeak exploited that access. The question of whether Copilot should have access to all of that data, or whether least-privilege principles should constrain what it can retrieve and share, is a governance question that most organizations have not answered.
OpenClaw, by the time of the ClawJacked disclosure, had become shadow AI at scale — a developer-adopted tool running on thousands of enterprise machines outside IT visibility, with access to shell execution, messaging platforms, and local credentials. The governance question — who knows it is running, what it has access to, whether it is being monitored — was not being asked at most organizations where it was deployed.
AI agents are a new class of identity in organizations. They authenticate, hold credentials, and take autonomous actions with the same or greater capability than a human user. They need to be governed with the same rigor as human users and service accounts — inventoried, least-privileged, monitored, and subject to the same incident response procedures as any other compromised account.
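Treating agents as governed identities can start with something as simple as an inventory record; the fields and scope names here are a hypothetical sketch, not any vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent recorded like any other service account:
    inventoried, owned, scoped, and monitored."""
    name: str
    owner: str                                     # accountable human or team
    scopes: set[str] = field(default_factory=set)  # explicit grants only
    monitored: bool = False

    def excess_scopes(self, required: set[str]) -> set[str]:
        """Grants beyond what the agent's tasks require -- a least-privilege violation."""
        return self.scopes - required

copilot = AgentIdentity(
    name="m365-copilot",
    owner="it-security",
    scopes={"mail.read", "sharepoint.read.all", "teams.read"},
)
```

Running `copilot.excess_scopes({"mail.read"})` against the scopes an agent actually needs makes over-provisioning — the condition EchoLeak exploited — a reviewable finding rather than an invisible default.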
The three incidents of the past two months are not anomalies. They are previews. As AI agents become more capable, more deeply integrated into enterprise systems, and more widely adopted outside formal IT governance processes, the attack surface they represent grows. The organizations that understand this now — and build the evaluation, monitoring, and governance infrastructure to address it — are the ones that will not be writing incident reports when the next zero-click attack lands.
References:
CVE-2025-32711 (EchoLeak) — Aim Security, "EchoLeak Vulnerability Found in Microsoft 365 Copilot," June 2025
CVE-2026-0628 (Glic Jack) — Palo Alto Networks Unit 42, March 2026
CVE-2026-25253 (ClawJacked) — Oasis Security, "ClawJacked: OpenClaw Vulnerability Enables Full Agent Takeover," February 26, 2026
CVE-2026-2256 (MS-Agent) — SecurityWeek, "Vulnerability in MS-Agent AI Framework Can Allow Full System Compromise," March 2026
Apta Sentry Code Scanning — /products/code-scanning
Apta Sentry Runtime Monitoring — /products/runtime-monitoring
Apta Sentry Consulting — /products/consulting