Anthropic has released a new "Code Review" feature for its Claude Code tool, now available in a research preview. The system uses teams of AI agents that work in parallel to automatically check developer pull requests for bugs and other potential issues. [4, 5]

The feature aims to accelerate development cycles and catch errors that human reviewers might overlook. When a pull request is submitted, multiple agents detect bugs, verify the findings to filter out false positives, and rank the remaining issues by severity before presenting a consolidated summary. [4]

Anthropic claims its internal tests showed the feature tripled the amount of meaningful code review feedback. [4] The service is billed on token usage, with a typical review costing between $15 and $25. [4]
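The detect-verify-rank pipeline described above can be sketched in Python. This is a purely illustrative mock, not Anthropic's implementation: the detector agents, the `Finding` type, and the confidence-based `verify` filter are all hypothetical stand-ins for what would actually be LLM calls.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Finding:
    description: str
    severity: int      # 1 (nit) .. 5 (critical)
    confidence: float  # 0..1, assigned by the detecting agent


# Hypothetical detector agents: each inspects the PR diff for one
# class of problem. In the real feature these would be AI agents.
def detect_null_derefs(diff: str) -> list[Finding]:
    return [Finding("possible null dereference in parser", 4, 0.9)]


def detect_style_issues(diff: str) -> list[Finding]:
    return [Finding("inconsistent naming in helpers", 1, 0.3)]


def verify(finding: Finding, threshold: float = 0.5) -> bool:
    # A verification pass would re-check each finding; here we simply
    # drop low-confidence results to model false-positive filtering.
    return finding.confidence >= threshold


def review(diff: str) -> list[Finding]:
    agents = [detect_null_derefs, detect_style_issues]
    # Run the detector agents in parallel, as the article describes.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda agent: agent(diff), agents)
    findings = [f for batch in batches for f in batch]
    # Filter out likely false positives, then rank by severity for
    # the consolidated summary.
    verified = [f for f in findings if verify(f)]
    return sorted(verified, key=lambda f: f.severity, reverse=True)


if __name__ == "__main__":
    for f in review("...example diff..."):
        print(f"[sev {f.severity}] {f.description}")
```

With the mock agents above, the low-confidence style nit is filtered out and only the verified high-severity finding survives into the ranked summary.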