Anthropic Deploys AI Agent Teams to Automate Code Reviews

Anthropic has released a new "Code Review" feature for its Claude Code tool, now available as a research preview. The system uses teams of AI agents working in parallel to automatically check developer pull requests for bugs and other potential issues, aiming to accelerate development cycles and catch errors that human reviewers might overlook. When a pull request is submitted, specialized agents detect bugs, verify the findings to filter out false positives, and rank issues by severity before presenting a consolidated summary. Anthropic says its internal tests showed the feature tripled the amount of meaningful code review feedback. The service is billed on token usage, with a typical review costing between $15 and $25.