Anthropic Launches Claude Code Security In Research Preview

US-based AI firm Anthropic has introduced Claude Code Security, a new feature within its web-based Claude Code platform aimed at identifying and fixing software vulnerabilities. The capability is currently available as a limited research preview for Enterprise and Team customers, with expedited access for select open-source maintainers.

Claude Code Security scans entire codebases to detect security vulnerabilities and recommends targeted patches. However, no fixes are applied automatically — human developers must review and approve every suggested change before implementation.

Anthropic said the tool is designed to address a persistent industry challenge: security teams facing a growing number of vulnerabilities with limited resources. Traditional static analysis tools rely on rule-based systems that match code against known vulnerability patterns. While effective for common issues such as exposed credentials or outdated encryption, these tools often fail to detect more complex problems, including flaws in business logic, broken access control, and subtle multi-component data flow issues.
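
As a rough illustration of that gap (not a description of any particular product's internals), the sketch below shows a toy rule-based check of the kind traditional scanners rely on. It reliably flags a hardcoded credential, but it has no way to notice the broken access control flaw a few lines away, because nothing in the text of that code matches a known bad pattern. All names in the example are hypothetical.

```python
import re

# A toy rule-based check of the kind traditional scanners rely on:
# it flags hardcoded credentials by pattern-matching source text.
CREDENTIAL_PATTERN = re.compile(
    r'(password|api_key|secret)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)

def scan_for_credentials(source: str) -> list[str]:
    """Return the lines that match a known 'hardcoded secret' pattern."""
    return [line for line in source.splitlines() if CREDENTIAL_PATTERN.search(line)]

SAMPLE = '''
api_key = "sk-live-1234567890"          # caught: matches a known pattern

def get_invoice(request, invoice_id):
    # Broken access control: the invoice is returned to any logged-in
    # user, with no check that it belongs to request.user. No textual
    # pattern identifies this; spotting it requires reasoning about
    # the application's authorization logic.
    return db.fetch("SELECT * FROM invoices WHERE id = ?", invoice_id)
'''

print(scan_for_credentials(SAMPLE))  # finds the key, misses the logic flaw
```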

How The AI System Works And The Risks Involved

Unlike traditional scanners, Claude Code Security is built to “read and reason” about software more like a human reviewer. It analyses how different components of an application interact, traces data movement across systems, and flags indirect or complex weaknesses.
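
To make "tracing data movement" concrete, the hypothetical Python sketch below shows the class of weakness the company describes: untrusted input enters in one function, passes through a helper that looks harmless in isolation, and only becomes a SQL injection at the final query. Spotting it means following the value across all three functions rather than matching any single line against a rule. The example is illustrative only and does not reflect how Claude Code Security works internally.

```python
import sqlite3

def handle_request(params: dict) -> str:
    # Component 1: untrusted input enters the system.
    return params.get("username", "")

def build_filter(username: str) -> str:
    # Component 2: looks benign on its own, but concatenates the
    # untrusted value into a query fragment.
    return f"username = '{username}'"

def find_user(conn: sqlite3.Connection, where_clause: str):
    # Component 3 (the sink): executes the assembled SQL, so anything
    # injected upstream runs here.
    return conn.execute(f"SELECT id FROM users WHERE {where_clause}").fetchall()

# The fix is local to the sink, but seeing why it is needed requires
# following the value across all three functions:
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id FROM users WHERE username = ?", (username,)
    ).fetchall()
```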

When a potential vulnerability is identified, the system runs a multi-stage verification process to confirm or disprove its own findings. It then assigns a severity rating, provides a confidence score, and suggests a targeted software patch. Developers can review all findings through a dashboard before deciding whether to apply fixes.

Anthropic said it has been testing Claude’s cybersecurity capabilities for over a year using its latest model, Claude Opus 4.6. According to the company, the system identified more than 500 vulnerabilities in production open-source codebases, some of which had reportedly gone undetected for years. The company added that it is working through responsible disclosure processes with maintainers.

However, Anthropic also acknowledged the dual-use risk of AI in cybersecurity. The same advanced reasoning capabilities that help defenders detect vulnerabilities could also assist attackers in identifying and exploiting them at scale. Threat actors using similar AI tools could accelerate zero-day discovery, automate large-scale scanning of repositories, exploit business logic flaws, and significantly reduce the time between vulnerability discovery and exploitation.

Following the announcement, shares of several cybersecurity firms — including CrowdStrike, Cloudflare, Zscaler, Palo Alto Networks, Okta, GitLab, JFrog, and Rubrik — declined, reflecting investor concern about how AI-driven security tools may reshape the cybersecurity market.

VoM News Desk