Anthropic Patches Chained Flaws in AI Git Server

The very artificial intelligence designed to streamline software development and automate complex coding tasks could have been turned into a malicious insider, executing an attacker’s commands from deep within a company’s most sensitive codebases. This once-hypothetical scenario edged closer to reality following the discovery of a series of interconnected vulnerabilities in a key server component developed by the AI firm Anthropic. The flaws, unearthed by security researchers at Cyata, highlighted a new and sophisticated class of threat emerging from the complex interplay of modern AI systems.

This incident is not about a single, simple bug but rather a chain of weaknesses that, when linked together, could grant an attacker significant control over a development environment. The vulnerabilities resided in Anthropic’s official Git Model Context Protocol (MCP) server, a tool designed to allow AI agents to interact with code repositories. A successful exploit could have allowed an adversary, through a carefully crafted prompt, to trick an AI assistant into performing unauthorized actions, including executing arbitrary code and manipulating critical files.

When Your AI Coding Assistant Becomes an Attacker’s Inside Agent

The central conflict in today’s AI-driven development landscape is the dual nature of these powerful tools. While AI assistants like Claude and Copilot dramatically accelerate productivity, their integration into core workflows also creates an attractive new attack surface. The Anthropic case serves as a prime example, where a tool intended to be a helpful collaborator could be subverted to act as an inside agent for a malicious actor, operating with the trusted permissions of the system it was meant to serve.

The security advisory details how a sequence of vulnerabilities within Anthropic’s Git server could be chained to achieve this. Discovered by Cyata, a firm specializing in agentic AI security, the exploit path allowed for remote code execution (RCE) via an indirect prompt injection attack. Essentially, an attacker could plant malicious instructions in a data source the AI would later read, turning the AI into an unwitting accomplice that executes commands on the underlying server.

The Unseen Bridge: Understanding the AI Git Server’s Critical Role

At the heart of this issue is mcp-server-git, a server component acting as an essential bridge between AI models and the world of software development. Its purpose is to interpret requests from AI agents and translate them into concrete actions within Git repositories, such as reading code, analyzing file history, or even preparing new commits. This server is the invisible intermediary that gives an AI the “hands” it needs to interact with a developer’s codebase.

Granting an AI programmatic access to a sensitive development environment is both a revolutionary step forward and a decision laden with inherent risk. When secure, this bridge unlocks unprecedented automation and efficiency. However, when compromised, it provides a direct pathway for an attacker to manipulate the very foundation of a software project. The potential for unauthorized code execution, intellectual property theft, or the injection of subtle backdoors becomes a tangible threat, transforming the AI from a tool into a weapon.

Unpacking the Exploit: A Trio of Interrelated Flaws

The exploit chain meticulously documented by Cyata was built upon three distinct but interconnected vulnerabilities, each assigned a CVE identifier. The first, CVE-2025-68145, was a path validation bypass. The server was designed to operate within a specific, “sandboxed” repository, but it failed to properly check if subsequent commands stayed within that boundary. This oversight effectively allowed a manipulated AI to break out of its designated workspace and access any other Git project on the same filesystem.

The other two flaws provided the mechanisms to weaponize this access. CVE-2025-68143 was an unrestricted git_init command that permitted an attacker to initialize a new, malicious Git repository in any system directory, even sensitive ones. Following this discovery, Anthropic removed the git_init tool entirely. This was paired with CVE-2025-68144, an argument injection flaw in core Git functions that allowed an attacker to pass unsanitized commands. This could be used to overwrite arbitrary files, providing a way to corrupt data or, more critically, set the stage for code execution.

The Real Danger: Combinatorial Risk in Agentic AI Systems

This incident underscores a profound security lesson for the age of AI: individual components that seem secure in isolation can create dangerous, unforeseen attack vectors when combined. Yarden Porat, a security researcher at Cyata, described this phenomenon as a “toxic combination,” where the interaction between different AI tools or servers gives rise to emergent threats. In this context, the sum of the system’s parts presents a far greater risk than any single component.

This concept of “combinatorial risk” is a critical consideration for any organization integrating complex, multi-tool AI systems. As AI agents are granted access to more tools—a file system server, a Git server, a web browser—the potential for these multi-stage exploits grows exponentially. Securing such a system requires a paradigm shift, moving away from analyzing tools individually and toward understanding the holistic security posture of the entire interconnected agentic ecosystem.

Anatomy of the Attack and the Path to Mitigation

The complete attack chain cleverly synthesized these vulnerabilities through indirect prompt injection. An attacker would begin by planting malicious instructions in a data source the AI was expected to process, such as a GitHub issue or a project’s README file. Upon reading this poisoned data, the AI would be manipulated into executing a four-step attack: create a malicious Git repository in a sensitive location, use argument injection to overwrite the repository’s configuration file with code-executing filters, and finally, trigger the command execution by checking out a file in the rogue repository.

Cyata reported these issues to Anthropic in June 2025, and patches were released in the December 2025.12.18 version of mcp-server-git. Organizations using this component were strongly urged to update immediately to mitigate the risk. While there was no evidence of these flaws being exploited in the wild, the disclosure served as a powerful case study in the evolving threat landscape of agentic AI. It reinforced the need for security teams to move beyond evaluating individual tools and instead assess the “effective permissions” of the entire system, mapping how different AI capabilities can be chained together to create unintended and dangerous outcomes.

Advertisement

You Might Also Like

Advertisement
shape

Get our content freshly delivered to your inbox. Subscribe now ->

Receive the latest, most important information on cybersecurity.
shape shape