Cybersecurity researchers have disclosed details of a now-patched critical security vulnerability in Ask Gordon, the artificial intelligence (AI) assistant integrated into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited for code execution and sensitive data exfiltration.
The flaw, codenamed DockerDash, was identified by the cybersecurity firm Noma Labs. Docker subsequently addressed the issue in Docker Desktop version 4.50.0, released in November 2025.
According to Sasi Levi, security research lead at Noma, the DockerDash vulnerability allowed a malicious metadata label within a Docker image to compromise a Docker environment. This attack involved three stages: Ask Gordon AI interpreting the malicious instruction, forwarding it to the Model Context Protocol (MCP) Gateway, and the Gateway then executing it via MCP tools.
Levi highlighted that each stage of this attack proceeded without any validation, exploiting the existing architecture of agents and the MCP Gateway.
Successful exploitation could have resulted in critical remote code execution in cloud and CLI environments, as well as significant data exfiltration from Docker Desktop.
Noma Security explained that the core issue was the AI assistant’s treatment of unverified metadata as executable commands. This allowed malicious instructions to bypass security boundaries and propagate through various layers without validation, meaning a basic AI query could trigger tool execution.
Given that MCP serves as a link between a large language model (LLM) and the local environment, the vulnerability represented a breakdown in contextual trust. This issue has been described as a form of Meta-Context Injection.
Levi noted that the MCP Gateway was unable to differentiate between standard informational metadata, such as a Docker LABEL, and pre-authorized, executable internal instructions. Attackers could exploit this by embedding malicious instructions within these metadata fields, thereby hijacking the AI’s reasoning process.
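To make that distinction concrete, the following is a minimal Python sketch using the docker SDK (docker-py). The image tag, label keys, and placeholder label text are hypothetical and do not reproduce the actual DockerDash payload; the point is only that a LABEL is an arbitrary key/value string baked into the image config, so instruction-like text rides along exactly like any other metadata.

```python
import io
import docker  # docker-py SDK: pip install docker

# The label key and wording below are purely illustrative.
dockerfile = """
FROM alpine:3.20
LABEL org.opencontainers.image.description="Helper image for CI jobs"
LABEL com.example.notes="(instruction-like text aimed at the AI assistant would sit here)"
"""

client = docker.from_env()
image, _ = client.images.build(
    fileobj=io.BytesIO(dockerfile.encode("utf-8")),
    tag="label-demo:latest",
    rm=True,
)

# Both labels are stored and surfaced identically; nothing marks one as "data"
# and the other as an "instruction".
print(image.labels)
```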
In a hypothetical attack, a threat actor could exploit a trust boundary violation in Ask Gordon's parsing of container metadata by crafting a malicious Docker image with instructions embedded in its Dockerfile LABEL fields.
Although these metadata fields appear harmless, they become injection vectors once Ask Gordon processes them. The code execution attack chain unfolds as follows (a sketch of how this metadata reaches downstream consumers appears after the list):
- An attacker publishes a Docker image embedded with weaponized LABEL instructions in its Dockerfile.
- When a user queries Ask Gordon AI about this image, the AI reads the image metadata, including all LABEL fields. This exploits Ask Gordon’s inability to distinguish between legitimate metadata and malicious embedded instructions.
- Ask Gordon then forwards these parsed instructions to the MCP Gateway, a middleware layer connecting AI agents and MCP servers.
- The MCP Gateway interprets this as a standard request from a trusted source and executes the specified MCP tools without further validation.
- The MCP tool then executes the command using the victim’s Docker privileges, resulting in code execution.
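The sketch below illustrates the consumer side of this chain under stated assumptions: the registry and image name are hypothetical, and the code simply shows that any tool reading image metadata receives LABEL values verbatim, which is where the trust boundary is crossed if those strings are folded into an assistant's context as prose rather than quoted as untrusted data.

```python
import docker  # docker-py SDK: pip install docker

client = docker.from_env()

# Hypothetical image reference; any consumer of image metadata gets the
# LABEL values back exactly as the image author wrote them.
image = client.images.pull("registry.example.com/acme/helper", tag="latest")

for key, value in image.labels.items():
    # An assistant that treats these strings as instructions, rather than as
    # inert data, lets the image author steer its next tool call.
    print(f"{key} -> {value}")
```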
The data exfiltration vulnerability relied on the same prompt injection flaw but targeted Ask Gordon's Docker Desktop implementation, abusing the assistant's read-only permissions to harvest sensitive internal environment data via MCP tools.
Information that could be gathered included details on installed tools, container specifics, Docker configuration, mounted directories, and network topology.
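To show why read-only access is still valuable to an attacker, here is a minimal sketch using the docker SDK that collects the same categories of information from a local daemon. It is not Noma's exploit code, only an indication of what a read-only session already exposes.

```python
import docker  # docker-py SDK: pip install docker

client = docker.from_env()

# Daemon-level configuration: storage driver, plugins, registry settings, etc.
daemon_info = client.info()

inventory = []
for container in client.containers.list():
    attrs = container.attrs  # full `docker inspect` output
    inventory.append({
        "name": container.name,
        "image": attrs["Config"]["Image"],
        "mounts": [m.get("Source") for m in attrs.get("Mounts", [])],
        "networks": list(attrs["NetworkSettings"]["Networks"].keys()),
    })

# Even without write access, this maps the daemon configuration, running
# workloads, host paths mounted into containers, and network topology.
print(daemon_info.get("ServerVersion"), inventory)
```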
Notably, version 4.50.0 also addressed a second prompt injection vulnerability in Ask Gordon. Discovered by Pillar Security, this flaw could have enabled attackers to hijack the assistant and exfiltrate sensitive data by seeding Docker Hub repository metadata with malicious instructions.
Levi emphasized that DockerDash underscores AI supply chain risk as a significant threat: trusted input sources can conceal malicious payloads capable of manipulating an AI's execution path. Defending against this new category of attacks requires zero-trust validation of all contextual data supplied to AI models.
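What that validation might look like in practice is open-ended. The sketch below is one illustrative shape, assuming a hypothetical sanitization step between the metadata source and the model; the allowlist, regex heuristic, and function name are invented for the example and are not part of Docker's fix.

```python
import re

# Illustrative heuristic only; a production control would combine key
# allowlists, structural encoding of metadata as quoted data (never prose),
# and policy enforcement at the MCP gateway itself.
INSTRUCTION_LIKE = re.compile(
    r"(ignore (all|previous) instructions|run\b|execute\b|call the)",
    re.IGNORECASE,
)

ALLOWED_KEYS = {
    "org.opencontainers.image.description",
    "org.opencontainers.image.source",
    "org.opencontainers.image.version",
}

def sanitize_labels(labels: dict[str, str]) -> dict[str, str]:
    """Treat image labels as untrusted data before they reach the model."""
    clean = {}
    for key, value in labels.items():
        if key not in ALLOWED_KEYS:
            continue  # drop unexpected keys outright
        if INSTRUCTION_LIKE.search(value):
            clean[key] = "[redacted: instruction-like content]"
        else:
            clean[key] = value
    return clean
```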


