
Anthropic has introduced a new security feature for its Claude AI, designed to scan software codebases for vulnerabilities and recommend patches.
Named Claude Code Security, this feature is initially accessible to a select group of enterprise and team clients for testing. Its development involved over a year of rigorous stress-testing by internal red teams, participation in cybersecurity Capture the Flag competitions, and collaboration with Pacific Northwest National Laboratory to enhance its scanning accuracy.
Over the last two years, large language models (LLMs) have demonstrated growing potential in both code generation and cybersecurity. This advancement has accelerated software development and lowered the barrier to building new digital tools such as websites and applications.
In a recent blog post, Anthropic stated its expectation that a substantial portion of global code will soon be scanned by AI, attributing this to the models’ proven effectiveness in uncovering previously undetected bugs and security flaws.
However, these powerful capabilities also enable malicious actors to more quickly identify vulnerabilities within IT environments. Anthropic anticipates that with the increasing prevalence of AI-assisted coding, the need for automated vulnerability scanning will eventually surpass that for traditional manual security reviews.
As AI becomes more integral to software and application development, an integrated vulnerability scanner could significantly decrease the number of associated security flaws. The objective is to streamline much of the software security review process into a few simple actions, requiring user approval for any proposed patches or modifications before deployment.
Anthropic asserts that Claude Code Security operates by “reading and reasoning about code like a human researcher.” It aims to understand how various software components interact, track data flow, and detect significant bugs that might be overlooked by conventional static analysis methods.
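To illustrate the kind of cross-component issue such data-flow reasoning targets (this example is ours, not from Anthropic's announcement), consider a hypothetical Python handler where untrusted input passes through an intermediate helper before reaching a SQL query, a flow that a purely line-local static check can miss:

```python
import sqlite3

def get_request_param(request: dict) -> str:
    # Untrusted value taken straight from an HTTP request.
    return request["username"]

def build_lookup_query(username: str) -> str:
    # The tainted value is spliced into SQL here, two calls away from
    # where it entered, so a pattern match on the request handler
    # alone would not flag it; tracking the data flow does.
    return f"SELECT * FROM users WHERE name = '{username}'"

def handle_request(conn: sqlite3.Connection, request: dict) -> list:
    query = build_lookup_query(get_request_param(request))
    return conn.execute(query).fetchall()  # SQL injection sink

def handle_request_safe(conn: sqlite3.Connection, request: dict) -> list:
    # Safe alternative: a parameterized query keeps data and SQL separate.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?",
        (get_request_param(request),),
    ).fetchall()
```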
The company further explained that “every finding undergoes a multi-stage verification process before being presented to an analyst. Claude re-evaluates each result, attempting to confirm or refute its own discoveries and eliminate false positives.” Additionally, findings are assigned severity ratings to help teams prioritize the most critical fixes.
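As a rough sketch of what a verify-then-triage loop of this shape could look like (the stage names, severity scale, and functions below are assumptions for illustration, not Anthropic's actual pipeline), including the analyst-approval gate described earlier:

```python
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    file: str
    description: str
    severity: str    # one of SEVERITY_ORDER's keys
    confirmed: bool  # outcome of the re-evaluation pass

def triage(findings: list[Finding]) -> list[Finding]:
    # Stage 1: drop findings the re-evaluation pass could not
    # confirm, i.e. the likely false positives.
    confirmed = [f for f in findings if f.confirmed]
    # Stage 2: surface the most severe issues first so analysts
    # can prioritize the most critical fixes.
    return sorted(confirmed, key=lambda f: SEVERITY_ORDER[f.severity])

def review_queue(findings: list[Finding]) -> None:
    # Every proposed fix still requires explicit analyst approval
    # before any change is applied.
    for f in triage(findings):
        answer = input(f"[{f.severity}] {f.file}: {f.description} -- apply patch? [y/N] ")
        if answer.strip().lower() == "y":
            print(f"Patch for {f.file} approved.")
        else:
            print(f"Patch for {f.file} skipped.")
```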
Cybersecurity researchers have noted that while AI's capabilities in this domain have advanced considerably, the models are often most effective at identifying lower-impact bugs. Many organizations still rely on experienced human operators to manage these models and address more complex threats and vulnerabilities.
Nevertheless, tools such as Claude Opus and XBOW have demonstrated the capacity to uncover hundreds of software vulnerabilities. In some instances, these tools have discovered and patched flaws far faster than human teams.
Anthropic reported that Claude Opus 4.6 exhibits “notably better” performance in identifying high-severity vulnerabilities than its predecessors. This includes instances where it has uncovered flaws that remained “undetected for decades.”
Individuals interested in participating can apply for access to the program. Anthropic’s sign-up page specifies that testers must agree to use Claude Code Security exclusively on code owned by their company, for which they possess all necessary scanning rights. It is not to be used on third-party owned, licensed, or open-source projects.

