Anthropic launched Code Review yesterday: a managed, multi-agent service that analyzes GitHub PRs, finds bugs, and posts inline comments. Average completion time: 20 minutes. Average cost: $15-25 per review. Average availability to individual subscribers: zero.
Three weeks ago they launched Claude Code Security. An Opus 4.6-powered vulnerability scanner with a full dashboard, multi-stage verification, and proactive scanning that found 500+ previously unknown high-severity vulnerabilities in open-source codebases. Also enterprise-only.
Individual users get a /security-review slash command. Same name. Entirely different product.
The Pattern
Look at what’s behind the enterprise paywall:
- Code Review: Multi-agent PR analysis with severity-ranked inline comments
- Claude Code Security: Opus-powered scanning with dashboard and patch suggestions
- Opus model access via third-party tools: Blocked in January 2026 for subscription users
Now look at what competitors gate behind enterprise:
- GitHub Copilot Enterprise: SSO, audit trails, IP indemnification, zero data retention
- Cursor Enterprise: Custom deployment, compliance controls
- Windsurf Enterprise: FedRAMP High, on-premise deployment
The distinction is stark. Copilot, Cursor, and Windsurf charge enterprise customers for governance and compliance. The AI itself is the same. Anthropic charges enterprise customers for better AI. The individual subscriber gets a materially worse product, not just fewer admin controls.
Nobody objects to enterprise plans costing more. The issue is what you’re paying for. When Copilot Enterprise costs $39/user, you’re getting the same AI plus SOC2 compliance. When Anthropic locks multi-agent review and full security scanning behind enterprise, you’re getting a worse AI plus a prayer that it trickles down eventually.
The Loss Leader Lifecycle
The $200 Max plan was never sustainable. Developers estimated it delivered over $1,000 in API-equivalent value per month. Anthropic knew this. The plan existed to build habits, lock in workflows, and create switching costs.
Phase one worked. Claude Code now generates 4% of all public GitHub commits. Roughly 135,000 commits per day. The tool is embedded in developer workflows deeply enough that abandoning it means rewriting muscle memory.
Phase two arrived in January 2026. Anthropic blocked Opus access through third-party tools. Max subscribers who had upgraded specifically to use Opus via OpenCode or Cursor had their workflows broken overnight without warning.
“I pay them $100 a month and now for some reason I can’t use OpenCode? Fuck that.” — Hacker News user stavros
Phase three is happening now. The best new features ship exclusively to enterprise. Individual subscribers fund the R&D through their subscriptions and usage data, then watch the results land behind a “contact sales” page.
Consumer as Funnel
Anthropic’s revenue trajectory tells the story: $1B annualized in December 2024, $14B by February 2026, targeting $20-26B by year-end. 80% comes from enterprise customers. 500+ companies spend over $1M annually. Eight of ten Fortune 10 companies are Claude customers.
The internal framing, per Sacra’s research: consumer products are “enterprise lead generation.”
That framing has leaked into product decisions in ways individual subscribers can now feel directly. Code Review doesn’t have a “coming soon to Pro” roadmap. Security scanning doesn’t have a lighter individual tier beyond the slash command. These aren’t features in beta testing before wider rollout. They’re features built for the customer segment that generates 80% of revenue.
Anthropic raised $30B at a $380B valuation in February 2026. They’ve hired IPO counsel. They’re projecting $70B revenue by 2028. Public market investors want enterprise revenue quality: high contract values, low churn, predictable growth. Individual subscriptions at $200/month don’t tell that story. $1M+ enterprise contracts do.
The OpenCode Signal
OpenCode now has 112,000 GitHub stars. Claude Code has 71,000. The open-source alternative has nearly 60% more community support from developers who want to use Claude’s models without Claude’s restrictions.
But stars don’t equal usage. Claude Code’s 135,000 daily commits dwarf OpenCode’s footprint. The tool that developers prefer in principle isn’t the tool they use in practice. Which may be exactly the point: by the time the lock-in is uncomfortable enough to leave, the switching cost is too high.
“Claude Code is lock-in where Anthropic takes all the value. If the frontend and API are decoupled, they are one benchmark away from losing half their users.” — Hacker News user bluelightning2k
This is the classic platform play. Subsidize adoption, restrict interoperability, monetize the captive audience. Microsoft did it with Windows. Apple does it with the App Store. Anthropic is doing it with AI capability tiers.
What Code Review Actually Does
The product itself is impressive, which makes the gating more frustrating.
A fleet of parallel agents analyzes each PR against the full codebase. Each agent checks for a different class of issue: logic errors, security vulnerabilities, regressions, edge cases. A verification step filters false positives. Results are deduplicated, severity-ranked, and posted as inline comments.
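The fan-out / verify / deduplicate / rank shape of that pipeline can be sketched in a few lines. This is an illustrative sketch only, not Anthropic's implementation: the agent class names, the toy pattern-matching "agents", and the severity threshold are all assumptions; a real agent would call a model with the diff plus codebase context.

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass(frozen=True)
class Finding:
    check: str       # which agent class produced it
    location: str    # file:line within the PR
    severity: int    # higher = more serious
    message: str

# Assumed issue classes, mirroring the four named in the text.
AGENT_CLASSES = ["logic_errors", "security", "regressions", "edge_cases"]

def run_agent(check: str, pr_diff: str) -> list[Finding]:
    """Toy stand-in for one specialized agent analyzing the PR."""
    if check == "security" and "eval(" in pr_diff:
        return [Finding(check, "app.py:12", 9, "eval() on user input")]
    if check == "logic_errors" and "== None" in pr_diff:
        return [Finding(check, "app.py:7", 3, "use 'is None', not '== None'")]
    return []

def verify(f: Finding) -> bool:
    """Toy stand-in for the verification pass that filters false positives."""
    return f.severity >= 2

def review(pr_diff: str) -> list[Finding]:
    # Fan out: one agent per issue class, run in parallel.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda c: run_agent(c, pr_diff), AGENT_CLASSES)
    findings = [f for batch in batches for f in batch]
    # Verify, deduplicate by (check, location), then rank by severity.
    kept = {(f.check, f.location): f for f in findings if verify(f)}
    return sorted(kept.values(), key=lambda f: -f.severity)
```

The structure is the point, not the toy checks: independent specialists run concurrently, a separate pass vets their output, and only a ranked, deduplicated list reaches the PR as comments.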
Anthropic’s internal numbers: substantive feedback jumped from 16% to 54% of PRs. On PRs over 1,000 lines, it finds issues 84% of the time with an average of 7.5 findings. Engineer disagreement rate: under 1%.
Setup is admin-only. Install the GitHub App, select repos, choose trigger mode. Developers need zero configuration. You can customize what it flags via a REVIEW.md file in your repo. Violations of your existing CLAUDE.md are flagged as nits.
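Anthropic doesn't publish a schema for REVIEW.md here; by analogy with CLAUDE.md it presumably takes freeform natural-language instructions. A purely hypothetical example of what such a file might contain:

```markdown
# Review guidelines (hypothetical example, not an official schema)

- Flag any SQL built by string concatenation as high severity.
- Treat missing error handling on network calls as a normal finding.
- Do not comment on formatting; CI handles that.
- Ignore generated files under src/gen/.
```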
This is the kind of tool that would make individual developers meaningfully better at shipping quality code. Instead, it’s the kind of tool that makes enterprise sales calls more compelling.
The Uncomfortable Question
I use Claude Code daily. I pay for Max. I’ve written multiple posts on this blog about how it’s changed the way I work. I’m writing this post with it right now.
And I’m watching the product I depend on systematically deprioritize users like me in favor of companies buying thousands of seats. That’s rational business strategy. It’s also worth naming clearly, because Anthropic’s brand is built on being the “ethical AI company.” The ethics of how you treat your own users matters too.
The $200/month subscriber isn’t a charity case. They’re a paying customer who chose Anthropic over cheaper alternatives because they believed the product would keep getting better for them. When the best features consistently ship to enterprise-only, that belief erodes.
“The $200 plan is a loss leader to build ecosystem loyalty before prices inevitably rise.” — Hacker News user bakugo
Every platform company eventually has to decide: are individual users customers or product? Anthropic hasn’t answered that question explicitly. But the feature roadmap is answering it for them.